00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2357 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3618 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.210 Using shallow fetch with depth 1 00:00:00.210 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.210 > git --version # timeout=10 00:00:00.246 > git --version # 'git version 2.39.2' 00:00:00.246 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.824 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.836 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.848 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.848 > git config core.sparsecheckout # timeout=10 00:00:06.860 > git read-tree -mu HEAD # timeout=10 00:00:06.875 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.893 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.893 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.271 [Pipeline] Start of Pipeline 00:00:07.290 [Pipeline] library 00:00:07.292 Loading library shm_lib@master 00:00:07.293 Library shm_lib@master is cached. Copying from home. 00:00:07.311 [Pipeline] node 00:00:07.327 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:07.329 [Pipeline] { 00:00:07.339 [Pipeline] catchError 00:00:07.341 [Pipeline] { 00:00:07.354 [Pipeline] wrap 00:00:07.364 [Pipeline] { 00:00:07.372 [Pipeline] stage 00:00:07.373 [Pipeline] { (Prologue) 00:00:07.391 [Pipeline] echo 00:00:07.392 Node: VM-host-SM0 00:00:07.398 [Pipeline] cleanWs 00:00:07.410 [WS-CLEANUP] Deleting project workspace... 00:00:07.410 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.417 [WS-CLEANUP] done 00:00:07.604 [Pipeline] setCustomBuildProperty 00:00:07.686 [Pipeline] httpRequest 00:00:08.862 [Pipeline] echo 00:00:08.863 Sorcerer 10.211.164.101 is alive 00:00:08.872 [Pipeline] retry 00:00:08.874 [Pipeline] { 00:00:08.887 [Pipeline] httpRequest 00:00:08.891 HttpMethod: GET 00:00:08.892 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.893 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.905 Response Code: HTTP/1.1 200 OK 00:00:08.906 Success: Status code 200 is in the accepted range: 200,404 00:00:08.906 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:32.684 [Pipeline] } 00:00:32.702 [Pipeline] // retry 00:00:32.711 [Pipeline] sh 00:00:32.994 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:33.011 [Pipeline] httpRequest 00:00:33.438 [Pipeline] echo 00:00:33.440 Sorcerer 10.211.164.101 is alive 00:00:33.449 [Pipeline] retry 00:00:33.451 [Pipeline] { 00:00:33.465 [Pipeline] httpRequest 00:00:33.469 HttpMethod: GET 00:00:33.470 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:33.470 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:33.477 Response Code: HTTP/1.1 200 OK 00:00:33.477 Success: Status code 200 is in the accepted range: 200,404 00:00:33.478 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:20.426 [Pipeline] } 00:01:20.443 [Pipeline] // retry 00:01:20.451 [Pipeline] sh 00:01:20.736 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:23.282 [Pipeline] sh 00:01:23.561 + git -C spdk log --oneline -n5 00:01:23.561 c13c99a5e test: Various fixes for Fedora40 00:01:23.561 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:23.561 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:23.561 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:23.561 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:23.582 [Pipeline] writeFile 00:01:23.598 [Pipeline] sh 00:01:23.880 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:23.892 [Pipeline] sh 00:01:24.175 + cat autorun-spdk.conf 00:01:24.175 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.175 SPDK_TEST_NVMF=1 00:01:24.175 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.175 SPDK_TEST_VFIOUSER=1 00:01:24.175 SPDK_TEST_USDT=1 00:01:24.175 SPDK_RUN_UBSAN=1 00:01:24.175 SPDK_TEST_NVMF_MDNS=1 00:01:24.175 NET_TYPE=virt 00:01:24.175 SPDK_JSONRPC_GO_CLIENT=1 00:01:24.175 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.183 RUN_NIGHTLY=1 00:01:24.185 [Pipeline] } 00:01:24.202 [Pipeline] // stage 00:01:24.221 [Pipeline] stage 00:01:24.223 [Pipeline] { (Run VM) 00:01:24.237 [Pipeline] sh 00:01:24.571 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:24.571 + echo 'Start stage prepare_nvme.sh' 00:01:24.571 Start stage prepare_nvme.sh 00:01:24.571 + [[ -n 2 ]] 00:01:24.571 + disk_prefix=ex2 00:01:24.571 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:24.571 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:24.571 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:24.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.571 ++ SPDK_TEST_NVMF=1 
00:01:24.571 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.571 ++ SPDK_TEST_VFIOUSER=1 00:01:24.571 ++ SPDK_TEST_USDT=1 00:01:24.571 ++ SPDK_RUN_UBSAN=1 00:01:24.571 ++ SPDK_TEST_NVMF_MDNS=1 00:01:24.571 ++ NET_TYPE=virt 00:01:24.571 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:24.571 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.571 ++ RUN_NIGHTLY=1 00:01:24.571 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.571 + nvme_files=() 00:01:24.571 + declare -A nvme_files 00:01:24.571 + backend_dir=/var/lib/libvirt/images/backends 00:01:24.571 + nvme_files['nvme.img']=5G 00:01:24.571 + nvme_files['nvme-cmb.img']=5G 00:01:24.571 + nvme_files['nvme-multi0.img']=4G 00:01:24.571 + nvme_files['nvme-multi1.img']=4G 00:01:24.571 + nvme_files['nvme-multi2.img']=4G 00:01:24.571 + nvme_files['nvme-openstack.img']=8G 00:01:24.571 + nvme_files['nvme-zns.img']=5G 00:01:24.571 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:24.571 + (( SPDK_TEST_FTL == 1 )) 00:01:24.571 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:24.571 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:24.571 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.571 + for nvme in "${!nvme_files[@]}" 00:01:24.571 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:24.839 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.839 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:24.839 + echo 'End stage prepare_nvme.sh' 00:01:24.839 End stage prepare_nvme.sh 00:01:24.849 [Pipeline] sh 00:01:25.125 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:25.125 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:25.125 00:01:25.125 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:25.125 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:25.125 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:25.125 HELP=0 00:01:25.125 DRY_RUN=0 00:01:25.125 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:25.125 NVME_DISKS_TYPE=nvme,nvme, 00:01:25.125 NVME_AUTO_CREATE=0 00:01:25.125 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:25.125 NVME_CMB=,, 00:01:25.125 NVME_PMR=,, 00:01:25.125 NVME_ZNS=,, 00:01:25.125 NVME_MS=,, 00:01:25.125 NVME_FDP=,, 00:01:25.125 SPDK_VAGRANT_DISTRO=fedora39 00:01:25.125 SPDK_VAGRANT_VMCPU=10 00:01:25.125 SPDK_VAGRANT_VMRAM=12288 00:01:25.125 SPDK_VAGRANT_PROVIDER=libvirt 00:01:25.125 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:25.125 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:25.125 SPDK_OPENSTACK_NETWORK=0 00:01:25.125 VAGRANT_PACKAGE_BOX=0 00:01:25.125 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:25.125 FORCE_DISTRO=true 00:01:25.125 VAGRANT_BOX_VERSION= 00:01:25.125 EXTRA_VAGRANTFILES= 00:01:25.125 NIC_MODEL=e1000 00:01:25.125 00:01:25.125 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:25.125 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:27.653 Bringing machine 'default' up with 'libvirt' provider... 00:01:28.588 ==> default: Creating image (snapshot of base box volume). 00:01:28.588 ==> default: Creating domain with the following settings... 
00:01:28.588 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731037503_10807a7e2dbcbc586ab9 00:01:28.588 ==> default: -- Domain type: kvm 00:01:28.588 ==> default: -- Cpus: 10 00:01:28.588 ==> default: -- Feature: acpi 00:01:28.588 ==> default: -- Feature: apic 00:01:28.588 ==> default: -- Feature: pae 00:01:28.588 ==> default: -- Memory: 12288M 00:01:28.588 ==> default: -- Memory Backing: hugepages: 00:01:28.588 ==> default: -- Management MAC: 00:01:28.588 ==> default: -- Loader: 00:01:28.588 ==> default: -- Nvram: 00:01:28.588 ==> default: -- Base box: spdk/fedora39 00:01:28.588 ==> default: -- Storage pool: default 00:01:28.588 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731037503_10807a7e2dbcbc586ab9.img (20G) 00:01:28.588 ==> default: -- Volume Cache: default 00:01:28.588 ==> default: -- Kernel: 00:01:28.588 ==> default: -- Initrd: 00:01:28.588 ==> default: -- Graphics Type: vnc 00:01:28.588 ==> default: -- Graphics Port: -1 00:01:28.588 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.588 ==> default: -- Graphics Password: Not defined 00:01:28.588 ==> default: -- Video Type: cirrus 00:01:28.588 ==> default: -- Video VRAM: 9216 00:01:28.588 ==> default: -- Sound Type: 00:01:28.588 ==> default: -- Keymap: en-us 00:01:28.588 ==> default: -- TPM Path: 00:01:28.588 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.588 ==> default: -- Command line args: 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:28.588 ==> default: -> value=-drive, 00:01:28.588 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:28.588 ==> default: -> value=-drive, 00:01:28.588 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.588 ==> default: -> value=-drive, 00:01:28.588 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.588 ==> default: -> value=-drive, 00:01:28.588 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:28.588 ==> default: -> value=-device, 00:01:28.588 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.588 ==> default: Creating shared folders metadata... 00:01:28.588 ==> default: Starting domain. 00:01:30.493 ==> default: Waiting for domain to get an IP address... 00:01:48.592 ==> default: Waiting for SSH to become available... 00:01:48.592 ==> default: Configuring and enabling network interfaces... 
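For reference, the -device/-drive pairs printed above define two emulated NVMe controllers: nvme-0 (serial 12340), backed by ex2-nvme.img with a single namespace, and nvme-1 (serial 12341), with three namespaces backed by ex2-nvme-multi0/1/2.img; these surface in the guest as nvme0n1 and nvme1n1-nvme1n3 in the setup.sh status output later in this log. A minimal sketch of the equivalent flat qemu-system-x86_64 invocation, assembled only from the arguments shown above (the actual domain is generated by vagrant-libvirt, so the machine-type, memory, and network flags are omitted here):

  # sketch only: args copied verbatim from the vagrant-libvirt dump above
  qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Each nvme-ns device attaches one raw backing image (created by create_nvme_img.sh in the prepare_nvme.sh stage above) as a namespace on its controller's bus.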
00:01:51.123 default: SSH address: 192.168.121.201:22 00:01:51.123 default: SSH username: vagrant 00:01:51.123 default: SSH auth method: private key 00:01:53.655 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:01.830 ==> default: Mounting SSHFS shared folder... 00:02:02.398 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:02.398 ==> default: Checking Mount.. 00:02:03.777 ==> default: Folder Successfully Mounted! 00:02:03.777 ==> default: Running provisioner: file... 00:02:04.725 default: ~/.gitconfig => .gitconfig 00:02:04.991 00:02:04.991 SUCCESS! 00:02:04.991 00:02:04.991 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:04.991 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:04.991 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:04.991 00:02:05.000 [Pipeline] } 00:02:05.016 [Pipeline] // stage 00:02:05.025 [Pipeline] dir 00:02:05.026 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:05.027 [Pipeline] { 00:02:05.041 [Pipeline] catchError 00:02:05.042 [Pipeline] { 00:02:05.055 [Pipeline] sh 00:02:05.336 + vagrant ssh-config --host vagrant 00:02:05.336 + sed -ne /^Host/,$p 00:02:05.336 + tee ssh_conf 00:02:08.624 Host vagrant 00:02:08.624 HostName 192.168.121.201 00:02:08.624 User vagrant 00:02:08.624 Port 22 00:02:08.624 UserKnownHostsFile /dev/null 00:02:08.624 StrictHostKeyChecking no 00:02:08.624 PasswordAuthentication no 00:02:08.624 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:08.624 IdentitiesOnly yes 00:02:08.624 LogLevel FATAL 00:02:08.624 ForwardAgent yes 00:02:08.624 ForwardX11 yes 00:02:08.624 00:02:08.638 [Pipeline] withEnv 00:02:08.641 [Pipeline] { 00:02:08.657 [Pipeline] sh 00:02:08.940 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:08.940 source /etc/os-release 00:02:08.940 [[ -e /image.version ]] && img=$(< /image.version) 00:02:08.940 # Minimal, systemd-like check. 00:02:08.940 if [[ -e /.dockerenv ]]; then 00:02:08.940 # Clear garbage from the node's name: 00:02:08.940 # agt-er_autotest_547-896 -> autotest_547-896 00:02:08.940 # $HOSTNAME is the actual container id 00:02:08.940 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:08.940 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:08.940 # We can assume this is a mount from a host where container is running, 00:02:08.940 # so fetch its hostname to easily identify the target swarm worker. 
00:02:08.940 container="$(< /etc/hostname) ($agent)" 00:02:08.940 else 00:02:08.940 # Fallback 00:02:08.940 container=$agent 00:02:08.940 fi 00:02:08.940 fi 00:02:08.940 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:08.940 00:02:09.211 [Pipeline] } 00:02:09.227 [Pipeline] // withEnv 00:02:09.237 [Pipeline] setCustomBuildProperty 00:02:09.255 [Pipeline] stage 00:02:09.258 [Pipeline] { (Tests) 00:02:09.281 [Pipeline] sh 00:02:09.564 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:09.837 [Pipeline] sh 00:02:10.119 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:10.393 [Pipeline] timeout 00:02:10.394 Timeout set to expire in 1 hr 0 min 00:02:10.396 [Pipeline] { 00:02:10.412 [Pipeline] sh 00:02:10.695 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:11.264 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:11.276 [Pipeline] sh 00:02:11.558 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:11.837 [Pipeline] sh 00:02:12.147 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:12.424 [Pipeline] sh 00:02:12.706 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:12.706 ++ readlink -f spdk_repo 00:02:12.706 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:12.706 + [[ -n /home/vagrant/spdk_repo ]] 00:02:12.706 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:12.706 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:12.706 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:12.706 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:12.706 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:12.706 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:12.706 + cd /home/vagrant/spdk_repo 00:02:12.706 + source /etc/os-release 00:02:12.706 ++ NAME='Fedora Linux' 00:02:12.706 ++ VERSION='39 (Cloud Edition)' 00:02:12.706 ++ ID=fedora 00:02:12.706 ++ VERSION_ID=39 00:02:12.706 ++ VERSION_CODENAME= 00:02:12.706 ++ PLATFORM_ID=platform:f39 00:02:12.706 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:12.706 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:12.706 ++ LOGO=fedora-logo-icon 00:02:12.706 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:12.706 ++ HOME_URL=https://fedoraproject.org/ 00:02:12.706 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:12.706 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:12.706 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:12.706 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:12.706 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:12.706 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:12.706 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:12.706 ++ SUPPORT_END=2024-11-12 00:02:12.706 ++ VARIANT='Cloud Edition' 00:02:12.706 ++ VARIANT_ID=cloud 00:02:12.706 + uname -a 00:02:12.706 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:12.706 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:12.967 Hugepages 00:02:12.967 node hugesize free / total 00:02:12.967 node0 1048576kB 0 / 0 00:02:12.967 node0 2048kB 0 / 0 00:02:12.967 00:02:12.967 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:12.967 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:12.967 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:12.967 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:12.967 + rm -f /tmp/spdk-ld-path 00:02:12.967 + source autorun-spdk.conf 00:02:12.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.967 ++ SPDK_TEST_NVMF=1 00:02:12.967 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.967 ++ SPDK_TEST_VFIOUSER=1 00:02:12.967 ++ SPDK_TEST_USDT=1 00:02:12.967 ++ SPDK_RUN_UBSAN=1 00:02:12.967 ++ SPDK_TEST_NVMF_MDNS=1 00:02:12.967 ++ NET_TYPE=virt 00:02:12.967 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:12.967 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.967 ++ RUN_NIGHTLY=1 00:02:12.967 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:12.967 + [[ -n '' ]] 00:02:12.967 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:12.967 + for M in /var/spdk/build-*-manifest.txt 00:02:12.967 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:12.967 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.227 + for M in /var/spdk/build-*-manifest.txt 00:02:13.227 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:13.227 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.227 + for M in /var/spdk/build-*-manifest.txt 00:02:13.227 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:13.227 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.227 ++ uname 00:02:13.227 + [[ Linux == \L\i\n\u\x ]] 00:02:13.227 + sudo dmesg -T 00:02:13.227 + sudo dmesg --clear 00:02:13.227 + dmesg_pid=5239 00:02:13.227 + [[ Fedora Linux == FreeBSD ]] 00:02:13.227 + sudo dmesg -Tw 00:02:13.227 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.227 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.227 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:13.227 + [[ -x /usr/src/fio-static/fio ]] 00:02:13.227 + export FIO_BIN=/usr/src/fio-static/fio 00:02:13.227 + FIO_BIN=/usr/src/fio-static/fio 00:02:13.227 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:13.227 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:13.227 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:13.227 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.227 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.227 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:13.227 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.227 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.227 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:13.227 Test configuration: 00:02:13.227 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.227 SPDK_TEST_NVMF=1 00:02:13.227 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.227 SPDK_TEST_VFIOUSER=1 00:02:13.227 SPDK_TEST_USDT=1 00:02:13.227 SPDK_RUN_UBSAN=1 00:02:13.227 SPDK_TEST_NVMF_MDNS=1 00:02:13.227 NET_TYPE=virt 00:02:13.227 SPDK_JSONRPC_GO_CLIENT=1 00:02:13.227 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:13.227 RUN_NIGHTLY=1 03:45:48 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:13.227 03:45:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:13.227 03:45:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:13.227 03:45:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.227 03:45:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.227 03:45:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.227 03:45:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.227 03:45:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.227 03:45:48 -- paths/export.sh@5 -- $ export PATH 00:02:13.227 03:45:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.227 03:45:48 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:13.227 
03:45:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:13.227 03:45:48 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731037548.XXXXXX 00:02:13.227 03:45:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731037548.kL49sW 00:02:13.227 03:45:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:13.227 03:45:48 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:13.227 03:45:48 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:13.227 03:45:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:13.228 03:45:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:13.228 03:45:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:13.228 03:45:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:13.228 03:45:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.228 03:45:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:13.228 03:45:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:13.228 03:45:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:13.228 03:45:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:13.228 03:45:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:13.228 Fri Nov 8 03:45:48 AM UTC 2024 00:02:13.228 03:45:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:13.228 LTS-67-gc13c99a5e 00:02:13.228 03:45:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:13.228 03:45:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:13.228 03:45:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:13.228 03:45:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:13.228 03:45:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:13.228 03:45:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.228 ************************************ 00:02:13.228 START TEST ubsan 00:02:13.228 ************************************ 00:02:13.228 using ubsan 00:02:13.228 03:45:48 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:13.228 00:02:13.228 real 0m0.000s 00:02:13.228 user 0m0.000s 00:02:13.228 sys 0m0.000s 00:02:13.228 ************************************ 00:02:13.228 END TEST ubsan 00:02:13.228 ************************************ 00:02:13.228 03:45:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:13.228 03:45:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.488 03:45:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:13.488 03:45:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:13.488 03:45:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:13.488 03:45:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:13.488 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:13.488 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.057 Using 'verbs' RDMA provider 00:02:29.509 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:41.716 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:41.716 go version go1.21.1 linux/amd64 00:02:41.716 Creating mk/config.mk...done. 00:02:41.716 Creating mk/cc.flags.mk...done. 00:02:41.716 Type 'make' to build. 00:02:41.716 03:46:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:41.716 03:46:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:41.716 03:46:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:41.716 03:46:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.716 ************************************ 00:02:41.716 START TEST make 00:02:41.716 ************************************ 00:02:41.716 03:46:16 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:41.716 make[1]: Nothing to be done for 'all'. 00:02:43.092 The Meson build system 00:02:43.092 Version: 1.5.0 00:02:43.092 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:43.092 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:43.092 Build type: native build 00:02:43.092 Project name: libvfio-user 00:02:43.092 Project version: 0.0.1 00:02:43.092 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:43.092 C linker for the host machine: cc ld.bfd 2.40-14 00:02:43.092 Host machine cpu family: x86_64 00:02:43.092 Host machine cpu: x86_64 00:02:43.092 Run-time dependency threads found: YES 00:02:43.092 Library dl found: YES 00:02:43.092 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:43.092 Run-time dependency json-c found: YES 0.17 00:02:43.092 Run-time dependency cmocka found: YES 1.1.7 00:02:43.092 Program pytest-3 found: NO 00:02:43.092 Program flake8 found: NO 00:02:43.092 Program misspell-fixer found: NO 00:02:43.092 Program restructuredtext-lint found: NO 00:02:43.092 Program valgrind found: YES (/usr/bin/valgrind) 00:02:43.092 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.092 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.092 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.092 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:43.092 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:43.092 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:43.092 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:43.092 Build targets in project: 8 00:02:43.092 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:43.092 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:43.092 00:02:43.092 libvfio-user 0.0.1 00:02:43.092 00:02:43.092 User defined options 00:02:43.092 buildtype : debug 00:02:43.092 default_library: shared 00:02:43.092 libdir : /usr/local/lib 00:02:43.092 00:02:43.092 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.351 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:43.609 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:43.609 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:43.609 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:43.609 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:43.609 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:43.609 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:43.609 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:43.609 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:43.609 [9/37] Compiling C object samples/null.p/null.c.o 00:02:43.609 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:43.609 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:43.609 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:43.609 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:43.609 [14/37] Compiling C object samples/server.p/server.c.o 00:02:43.609 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:43.868 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:43.868 [17/37] Compiling C object samples/client.p/client.c.o 00:02:43.868 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:43.868 [19/37] Linking target samples/client 00:02:43.868 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:43.868 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:43.868 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:43.868 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:43.868 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:43.868 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:43.868 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:43.868 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:43.868 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:43.868 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:44.126 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:44.126 [31/37] Linking target test/unit_tests 00:02:44.126 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:44.126 [33/37] Linking target samples/lspci 00:02:44.126 [34/37] Linking target samples/gpio-pci-idio-16 00:02:44.126 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:44.126 [36/37] Linking target samples/null 00:02:44.126 [37/37] Linking target samples/server 00:02:44.126 INFO: autodetecting backend as ninja 00:02:44.126 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:44.126 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:44.708 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:44.708 ninja: no work to do. 00:02:52.820 The Meson build system 00:02:52.820 Version: 1.5.0 00:02:52.820 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:52.820 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.820 Build type: native build 00:02:52.820 Program cat found: YES (/usr/bin/cat) 00:02:52.820 Project name: DPDK 00:02:52.820 Project version: 23.11.0 00:02:52.820 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.820 C linker for the host machine: cc ld.bfd 2.40-14 00:02:52.820 Host machine cpu family: x86_64 00:02:52.820 Host machine cpu: x86_64 00:02:52.820 Message: ## Building in Developer Mode ## 00:02:52.820 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.820 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:52.820 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.820 Program python3 found: YES (/usr/bin/python3) 00:02:52.820 Program cat found: YES (/usr/bin/cat) 00:02:52.820 Compiler for C supports arguments -march=native: YES 00:02:52.820 Checking for size of "void *" : 8 00:02:52.820 Checking for size of "void *" : 8 (cached) 00:02:52.820 Library m found: YES 00:02:52.820 Library numa found: YES 00:02:52.820 Has header "numaif.h" : YES 00:02:52.820 Library fdt found: NO 00:02:52.820 Library execinfo found: NO 00:02:52.820 Has header "execinfo.h" : YES 00:02:52.820 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.820 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.820 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.820 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.820 Run-time dependency openssl found: YES 3.1.1 00:02:52.820 Run-time dependency libpcap found: YES 1.10.4 00:02:52.820 Has header "pcap.h" with dependency libpcap: YES 00:02:52.820 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.820 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.820 Compiler for C supports arguments -Wformat: YES 00:02:52.821 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.821 Compiler for C supports arguments -Wformat-security: NO 00:02:52.821 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.821 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.821 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.821 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.821 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.821 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.821 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.821 Compiler for C supports arguments -Wundef: YES 00:02:52.821 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.821 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.821 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.821 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.821 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.821 Program objdump found: YES (/usr/bin/objdump) 00:02:52.821 
Compiler for C supports arguments -mavx512f: YES 00:02:52.821 Checking if "AVX512 checking" compiles: YES 00:02:52.821 Fetching value of define "__SSE4_2__" : 1 00:02:52.821 Fetching value of define "__AES__" : 1 00:02:52.821 Fetching value of define "__AVX__" : 1 00:02:52.821 Fetching value of define "__AVX2__" : 1 00:02:52.821 Fetching value of define "__AVX512BW__" : (undefined) 00:02:52.821 Fetching value of define "__AVX512CD__" : (undefined) 00:02:52.821 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:52.821 Fetching value of define "__AVX512F__" : (undefined) 00:02:52.821 Fetching value of define "__AVX512VL__" : (undefined) 00:02:52.821 Fetching value of define "__PCLMUL__" : 1 00:02:52.821 Fetching value of define "__RDRND__" : 1 00:02:52.821 Fetching value of define "__RDSEED__" : 1 00:02:52.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.821 Fetching value of define "__znver1__" : (undefined) 00:02:52.821 Fetching value of define "__znver2__" : (undefined) 00:02:52.821 Fetching value of define "__znver3__" : (undefined) 00:02:52.821 Fetching value of define "__znver4__" : (undefined) 00:02:52.821 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.821 Message: lib/log: Defining dependency "log" 00:02:52.821 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.821 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.821 Checking for function "getentropy" : NO 00:02:52.821 Message: lib/eal: Defining dependency "eal" 00:02:52.821 Message: lib/ring: Defining dependency "ring" 00:02:52.821 Message: lib/rcu: Defining dependency "rcu" 00:02:52.821 Message: lib/mempool: Defining dependency "mempool" 00:02:52.821 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.821 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.821 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.821 Compiler for C supports arguments -mpclmul: YES 00:02:52.821 Compiler for C supports arguments -maes: YES 00:02:52.821 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.821 Compiler for C supports arguments -mavx512bw: YES 00:02:52.821 Compiler for C supports arguments -mavx512dq: YES 00:02:52.821 Compiler for C supports arguments -mavx512vl: YES 00:02:52.821 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.821 Compiler for C supports arguments -mavx2: YES 00:02:52.821 Compiler for C supports arguments -mavx: YES 00:02:52.821 Message: lib/net: Defining dependency "net" 00:02:52.821 Message: lib/meter: Defining dependency "meter" 00:02:52.821 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.821 Message: lib/pci: Defining dependency "pci" 00:02:52.821 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.821 Message: lib/hash: Defining dependency "hash" 00:02:52.821 Message: lib/timer: Defining dependency "timer" 00:02:52.821 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.821 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.821 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.821 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.821 Message: lib/power: Defining dependency "power" 00:02:52.821 Message: lib/reorder: Defining dependency "reorder" 00:02:52.821 Message: lib/security: Defining dependency "security" 00:02:52.821 Has header "linux/userfaultfd.h" : YES 00:02:52.821 Has header "linux/vduse.h" : YES 00:02:52.821 Message: lib/vhost: Defining dependency "vhost" 00:02:52.821 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:52.821 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.821 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.821 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.821 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:52.821 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:52.821 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:52.821 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:52.821 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:52.821 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:52.821 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:52.821 Configuring doxy-api-html.conf using configuration 00:02:52.821 Configuring doxy-api-man.conf using configuration 00:02:52.821 Program mandb found: YES (/usr/bin/mandb) 00:02:52.821 Program sphinx-build found: NO 00:02:52.821 Configuring rte_build_config.h using configuration 00:02:52.821 Message: 00:02:52.821 ================= 00:02:52.821 Applications Enabled 00:02:52.821 ================= 00:02:52.821 00:02:52.821 apps: 00:02:52.821 00:02:52.821 00:02:52.821 Message: 00:02:52.821 ================= 00:02:52.821 Libraries Enabled 00:02:52.821 ================= 00:02:52.821 00:02:52.821 libs: 00:02:52.821 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:52.821 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:52.821 cryptodev, dmadev, power, reorder, security, vhost, 00:02:52.821 00:02:52.821 Message: 00:02:52.821 =============== 00:02:52.821 Drivers Enabled 00:02:52.821 =============== 00:02:52.821 00:02:52.821 common: 00:02:52.821 00:02:52.821 bus: 00:02:52.821 pci, vdev, 00:02:52.821 mempool: 00:02:52.821 ring, 00:02:52.821 dma: 00:02:52.821 00:02:52.821 net: 00:02:52.821 00:02:52.821 crypto: 00:02:52.821 00:02:52.821 compress: 00:02:52.821 00:02:52.821 vdpa: 00:02:52.821 00:02:52.821 00:02:52.821 Message: 00:02:52.821 ================= 00:02:52.821 Content Skipped 00:02:52.821 ================= 00:02:52.821 00:02:52.821 apps: 00:02:52.821 dumpcap: explicitly disabled via build config 00:02:52.821 graph: explicitly disabled via build config 00:02:52.821 pdump: explicitly disabled via build config 00:02:52.821 proc-info: explicitly disabled via build config 00:02:52.821 test-acl: explicitly disabled via build config 00:02:52.821 test-bbdev: explicitly disabled via build config 00:02:52.821 test-cmdline: explicitly disabled via build config 00:02:52.821 test-compress-perf: explicitly disabled via build config 00:02:52.821 test-crypto-perf: explicitly disabled via build config 00:02:52.821 test-dma-perf: explicitly disabled via build config 00:02:52.821 test-eventdev: explicitly disabled via build config 00:02:52.821 test-fib: explicitly disabled via build config 00:02:52.821 test-flow-perf: explicitly disabled via build config 00:02:52.821 test-gpudev: explicitly disabled via build config 00:02:52.821 test-mldev: explicitly disabled via build config 00:02:52.821 test-pipeline: explicitly disabled via build config 00:02:52.821 test-pmd: explicitly disabled via build config 00:02:52.821 test-regex: explicitly disabled via build config 00:02:52.821 test-sad: explicitly disabled via build config 00:02:52.821 test-security-perf: explicitly disabled via build config 00:02:52.821 00:02:52.821 libs: 00:02:52.821 metrics: explicitly 
disabled via build config 00:02:52.821 acl: explicitly disabled via build config 00:02:52.821 bbdev: explicitly disabled via build config 00:02:52.821 bitratestats: explicitly disabled via build config 00:02:52.821 bpf: explicitly disabled via build config 00:02:52.821 cfgfile: explicitly disabled via build config 00:02:52.821 distributor: explicitly disabled via build config 00:02:52.821 efd: explicitly disabled via build config 00:02:52.821 eventdev: explicitly disabled via build config 00:02:52.821 dispatcher: explicitly disabled via build config 00:02:52.821 gpudev: explicitly disabled via build config 00:02:52.821 gro: explicitly disabled via build config 00:02:52.821 gso: explicitly disabled via build config 00:02:52.821 ip_frag: explicitly disabled via build config 00:02:52.821 jobstats: explicitly disabled via build config 00:02:52.821 latencystats: explicitly disabled via build config 00:02:52.821 lpm: explicitly disabled via build config 00:02:52.821 member: explicitly disabled via build config 00:02:52.821 pcapng: explicitly disabled via build config 00:02:52.821 rawdev: explicitly disabled via build config 00:02:52.821 regexdev: explicitly disabled via build config 00:02:52.821 mldev: explicitly disabled via build config 00:02:52.821 rib: explicitly disabled via build config 00:02:52.821 sched: explicitly disabled via build config 00:02:52.821 stack: explicitly disabled via build config 00:02:52.821 ipsec: explicitly disabled via build config 00:02:52.821 pdcp: explicitly disabled via build config 00:02:52.821 fib: explicitly disabled via build config 00:02:52.821 port: explicitly disabled via build config 00:02:52.821 pdump: explicitly disabled via build config 00:02:52.821 table: explicitly disabled via build config 00:02:52.821 pipeline: explicitly disabled via build config 00:02:52.821 graph: explicitly disabled via build config 00:02:52.821 node: explicitly disabled via build config 00:02:52.821 00:02:52.821 drivers: 00:02:52.821 common/cpt: not in enabled drivers build config 00:02:52.821 common/dpaax: not in enabled drivers build config 00:02:52.821 common/iavf: not in enabled drivers build config 00:02:52.821 common/idpf: not in enabled drivers build config 00:02:52.821 common/mvep: not in enabled drivers build config 00:02:52.821 common/octeontx: not in enabled drivers build config 00:02:52.821 bus/auxiliary: not in enabled drivers build config 00:02:52.821 bus/cdx: not in enabled drivers build config 00:02:52.822 bus/dpaa: not in enabled drivers build config 00:02:52.822 bus/fslmc: not in enabled drivers build config 00:02:52.822 bus/ifpga: not in enabled drivers build config 00:02:52.822 bus/platform: not in enabled drivers build config 00:02:52.822 bus/vmbus: not in enabled drivers build config 00:02:52.822 common/cnxk: not in enabled drivers build config 00:02:52.822 common/mlx5: not in enabled drivers build config 00:02:52.822 common/nfp: not in enabled drivers build config 00:02:52.822 common/qat: not in enabled drivers build config 00:02:52.822 common/sfc_efx: not in enabled drivers build config 00:02:52.822 mempool/bucket: not in enabled drivers build config 00:02:52.822 mempool/cnxk: not in enabled drivers build config 00:02:52.822 mempool/dpaa: not in enabled drivers build config 00:02:52.822 mempool/dpaa2: not in enabled drivers build config 00:02:52.822 mempool/octeontx: not in enabled drivers build config 00:02:52.822 mempool/stack: not in enabled drivers build config 00:02:52.822 dma/cnxk: not in enabled drivers build config 00:02:52.822 dma/dpaa: not in 
enabled drivers build config 00:02:52.822 dma/dpaa2: not in enabled drivers build config 00:02:52.822 dma/hisilicon: not in enabled drivers build config 00:02:52.822 dma/idxd: not in enabled drivers build config 00:02:52.822 dma/ioat: not in enabled drivers build config 00:02:52.822 dma/skeleton: not in enabled drivers build config 00:02:52.822 net/af_packet: not in enabled drivers build config 00:02:52.822 net/af_xdp: not in enabled drivers build config 00:02:52.822 net/ark: not in enabled drivers build config 00:02:52.822 net/atlantic: not in enabled drivers build config 00:02:52.822 net/avp: not in enabled drivers build config 00:02:52.822 net/axgbe: not in enabled drivers build config 00:02:52.822 net/bnx2x: not in enabled drivers build config 00:02:52.822 net/bnxt: not in enabled drivers build config 00:02:52.822 net/bonding: not in enabled drivers build config 00:02:52.822 net/cnxk: not in enabled drivers build config 00:02:52.822 net/cpfl: not in enabled drivers build config 00:02:52.822 net/cxgbe: not in enabled drivers build config 00:02:52.822 net/dpaa: not in enabled drivers build config 00:02:52.822 net/dpaa2: not in enabled drivers build config 00:02:52.822 net/e1000: not in enabled drivers build config 00:02:52.822 net/ena: not in enabled drivers build config 00:02:52.822 net/enetc: not in enabled drivers build config 00:02:52.822 net/enetfec: not in enabled drivers build config 00:02:52.822 net/enic: not in enabled drivers build config 00:02:52.822 net/failsafe: not in enabled drivers build config 00:02:52.822 net/fm10k: not in enabled drivers build config 00:02:52.822 net/gve: not in enabled drivers build config 00:02:52.822 net/hinic: not in enabled drivers build config 00:02:52.822 net/hns3: not in enabled drivers build config 00:02:52.822 net/i40e: not in enabled drivers build config 00:02:52.822 net/iavf: not in enabled drivers build config 00:02:52.822 net/ice: not in enabled drivers build config 00:02:52.822 net/idpf: not in enabled drivers build config 00:02:52.822 net/igc: not in enabled drivers build config 00:02:52.822 net/ionic: not in enabled drivers build config 00:02:52.822 net/ipn3ke: not in enabled drivers build config 00:02:52.822 net/ixgbe: not in enabled drivers build config 00:02:52.822 net/mana: not in enabled drivers build config 00:02:52.822 net/memif: not in enabled drivers build config 00:02:52.822 net/mlx4: not in enabled drivers build config 00:02:52.822 net/mlx5: not in enabled drivers build config 00:02:52.822 net/mvneta: not in enabled drivers build config 00:02:52.822 net/mvpp2: not in enabled drivers build config 00:02:52.822 net/netvsc: not in enabled drivers build config 00:02:52.822 net/nfb: not in enabled drivers build config 00:02:52.822 net/nfp: not in enabled drivers build config 00:02:52.822 net/ngbe: not in enabled drivers build config 00:02:52.822 net/null: not in enabled drivers build config 00:02:52.822 net/octeontx: not in enabled drivers build config 00:02:52.822 net/octeon_ep: not in enabled drivers build config 00:02:52.822 net/pcap: not in enabled drivers build config 00:02:52.822 net/pfe: not in enabled drivers build config 00:02:52.822 net/qede: not in enabled drivers build config 00:02:52.822 net/ring: not in enabled drivers build config 00:02:52.822 net/sfc: not in enabled drivers build config 00:02:52.822 net/softnic: not in enabled drivers build config 00:02:52.822 net/tap: not in enabled drivers build config 00:02:52.822 net/thunderx: not in enabled drivers build config 00:02:52.822 net/txgbe: not in enabled drivers 
build config 00:02:52.822 net/vdev_netvsc: not in enabled drivers build config 00:02:52.822 net/vhost: not in enabled drivers build config 00:02:52.822 net/virtio: not in enabled drivers build config 00:02:52.822 net/vmxnet3: not in enabled drivers build config 00:02:52.822 raw/*: missing internal dependency, "rawdev" 00:02:52.822 crypto/armv8: not in enabled drivers build config 00:02:52.822 crypto/bcmfs: not in enabled drivers build config 00:02:52.822 crypto/caam_jr: not in enabled drivers build config 00:02:52.822 crypto/ccp: not in enabled drivers build config 00:02:52.822 crypto/cnxk: not in enabled drivers build config 00:02:52.822 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.822 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.822 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.822 crypto/mlx5: not in enabled drivers build config 00:02:52.822 crypto/mvsam: not in enabled drivers build config 00:02:52.822 crypto/nitrox: not in enabled drivers build config 00:02:52.822 crypto/null: not in enabled drivers build config 00:02:52.822 crypto/octeontx: not in enabled drivers build config 00:02:52.822 crypto/openssl: not in enabled drivers build config 00:02:52.822 crypto/scheduler: not in enabled drivers build config 00:02:52.822 crypto/uadk: not in enabled drivers build config 00:02:52.822 crypto/virtio: not in enabled drivers build config 00:02:52.822 compress/isal: not in enabled drivers build config 00:02:52.822 compress/mlx5: not in enabled drivers build config 00:02:52.822 compress/octeontx: not in enabled drivers build config 00:02:52.822 compress/zlib: not in enabled drivers build config 00:02:52.822 regex/*: missing internal dependency, "regexdev" 00:02:52.822 ml/*: missing internal dependency, "mldev" 00:02:52.822 vdpa/ifc: not in enabled drivers build config 00:02:52.822 vdpa/mlx5: not in enabled drivers build config 00:02:52.822 vdpa/nfp: not in enabled drivers build config 00:02:52.822 vdpa/sfc: not in enabled drivers build config 00:02:52.822 event/*: missing internal dependency, "eventdev" 00:02:52.822 baseband/*: missing internal dependency, "bbdev" 00:02:52.822 gpu/*: missing internal dependency, "gpudev" 00:02:52.822 00:02:52.822 00:02:52.822 Build targets in project: 85 00:02:52.822 00:02:52.822 DPDK 23.11.0 00:02:52.822 00:02:52.822 User defined options 00:02:52.822 buildtype : debug 00:02:52.822 default_library : shared 00:02:52.822 libdir : lib 00:02:52.822 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.822 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:52.822 c_link_args : 00:02:52.822 cpu_instruction_set: native 00:02:52.822 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.822 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.822 enable_docs : false 00:02:52.822 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:52.822 enable_kmods : false 00:02:52.822 tests : false 00:02:52.822 00:02:52.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.822 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:52.822 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:52.822 [2/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.822 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.822 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.081 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.081 [6/265] Linking static target lib/librte_kvargs.a 00:02:53.081 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:53.081 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.081 [9/265] Linking static target lib/librte_log.a 00:02:53.081 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.647 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.647 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.647 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.647 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.647 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.905 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.905 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.905 [18/265] Linking static target lib/librte_telemetry.a 00:02:53.905 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.905 [20/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.905 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.164 [22/265] Linking target lib/librte_log.so.24.0 00:02:54.164 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.164 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.164 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:54.422 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:54.686 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:54.686 [28/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.686 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:54.686 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.686 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.686 [32/265] Linking target lib/librte_telemetry.so.24.0 00:02:54.686 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.686 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.686 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.949 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:54.949 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.949 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:55.206 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.206 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.206 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.206 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.206 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.543 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:55.543 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:55.543 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:55.802 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:55.802 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.060 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.060 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.060 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.060 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.060 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.060 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.318 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.318 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.318 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.576 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.576 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.576 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.834 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.834 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.834 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.834 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:57.092 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:57.092 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.092 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.092 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.350 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.608 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.608 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.608 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.608 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:57.608 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.608 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:57.608 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.608 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.608 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.174 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.174 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.174 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.433 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:58.433 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.433 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.433 [85/265] Linking static target lib/librte_ring.a 00:02:58.691 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:58.691 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.691 [88/265] Linking static target lib/librte_eal.a 00:02:58.691 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.691 [90/265] Linking static target lib/librte_rcu.a 00:02:58.949 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.949 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.949 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.207 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:59.207 [95/265] Linking static target lib/librte_mempool.a 00:02:59.464 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.464 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:59.464 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.465 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.465 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.465 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.722 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.722 [103/265] Linking static target lib/librte_mbuf.a 00:03:00.289 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.289 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.289 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.289 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.289 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.289 [109/265] Linking static target lib/librte_meter.a 00:03:00.289 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.289 [111/265] Linking static target lib/librte_net.a 00:03:00.547 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.547 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.805 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.805 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.063 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.063 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:01.321 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.583 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.841 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.099 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
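(Note: the [N/265] progress lines running through this part of the log are ninja compiling the DPDK subproject that was configured in the "User defined options" summary above. A minimal reconstruction of that configure step, built only from the paths and options the summary actually prints; the SPDK wrapper script that issues it is not visible in this excerpt, so treat the exact invocation as an approximation. The long disable_apps/disable_libs lists are omitted here; they are reproduced in full in the summary above.)

    # Approximate DPDK configure implied by the logged "User defined options".
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=shared \
        --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    # Build step; this matches the backend command the log reports later:
    ninja -C build-tmp -j 10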
00:03:02.099 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.099 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.099 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.099 [125/265] Linking static target lib/librte_pci.a 00:03:02.358 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.616 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.616 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.616 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.616 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.616 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.616 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.874 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.874 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.874 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.874 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.874 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.874 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.874 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.874 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.132 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.132 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.390 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.648 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.648 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.648 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.648 [147/265] Linking static target lib/librte_cmdline.a 00:03:03.648 [148/265] Linking static target lib/librte_ethdev.a 00:03:03.648 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.906 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.906 [151/265] Linking static target lib/librte_timer.a 00:03:03.906 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.906 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.906 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.164 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.422 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.422 [157/265] Linking static target lib/librte_hash.a 00:03:04.422 [158/265] Linking static target lib/librte_compressdev.a 00:03:04.422 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.422 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.686 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.686 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.686 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.686 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:05.251 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.251 [166/265] Linking static target lib/librte_dmadev.a 00:03:05.251 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.251 [168/265] Linking static target lib/librte_cryptodev.a 00:03:05.251 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:05.251 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:05.251 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:05.251 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.509 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.509 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.509 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.767 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.767 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.767 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:06.025 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:06.025 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:06.025 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:06.283 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:06.283 [183/265] Linking static target lib/librte_reorder.a 00:03:06.283 [184/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.283 [185/265] Linking static target lib/librte_power.a 00:03:06.542 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.542 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.800 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.800 [189/265] Linking static target lib/librte_security.a 00:03:06.800 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.800 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.058 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.623 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.623 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.623 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:07.623 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:07.623 [197/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.623 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.881 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.139 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.139 [201/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.139 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.398 [203/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:08.398 [204/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:08.398 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:08.398 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.668 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:08.668 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:08.668 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.668 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:08.668 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:08.668 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:08.668 [213/265] Linking static target drivers/librte_bus_vdev.a 00:03:08.940 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.940 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:08.940 [216/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:08.940 [217/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:08.940 [218/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:08.940 [219/265] Linking static target drivers/librte_bus_pci.a 00:03:08.940 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.940 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.940 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.940 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.940 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:09.505 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.764 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.764 [227/265] Linking static target lib/librte_vhost.a 00:03:10.699 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.699 [229/265] Linking target lib/librte_eal.so.24.0 00:03:10.957 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:10.957 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:10.957 [232/265] Linking target lib/librte_timer.so.24.0 00:03:10.957 [233/265] Linking target lib/librte_meter.so.24.0 00:03:10.957 [234/265] Linking target lib/librte_pci.so.24.0 00:03:10.957 [235/265] Linking target lib/librte_ring.so.24.0 00:03:10.957 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:10.957 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:10.957 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:10.957 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:10.957 [240/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:10.957 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:10.957 [242/265] Linking target lib/librte_mempool.so.24.0 00:03:10.957 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:10.957 [244/265] Linking target lib/librte_rcu.so.24.0 00:03:11.215 [245/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.215 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:11.215 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:11.215 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:11.215 [249/265] Linking target lib/librte_mbuf.so.24.0 00:03:11.474 [250/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.474 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:11.474 [252/265] Linking target lib/librte_net.so.24.0 00:03:11.474 [253/265] Linking target lib/librte_compressdev.so.24.0 00:03:11.474 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:03:11.474 [255/265] Linking target lib/librte_reorder.so.24.0 00:03:11.731 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:11.731 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:11.731 [258/265] Linking target lib/librte_hash.so.24.0 00:03:11.731 [259/265] Linking target lib/librte_security.so.24.0 00:03:11.731 [260/265] Linking target lib/librte_cmdline.so.24.0 00:03:11.731 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:11.731 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:11.990 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:11.990 [264/265] Linking target lib/librte_power.so.24.0 00:03:11.990 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:11.990 INFO: autodetecting backend as ninja 00:03:11.990 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.365 CC lib/log/log.o 00:03:13.365 CC lib/log/log_flags.o 00:03:13.365 CC lib/log/log_deprecated.o 00:03:13.365 CC lib/ut_mock/mock.o 00:03:13.365 CC lib/ut/ut.o 00:03:13.365 LIB libspdk_ut_mock.a 00:03:13.365 LIB libspdk_log.a 00:03:13.365 LIB libspdk_ut.a 00:03:13.365 SO libspdk_ut_mock.so.5.0 00:03:13.365 SO libspdk_ut.so.1.0 00:03:13.365 SO libspdk_log.so.6.1 00:03:13.365 SYMLINK libspdk_ut_mock.so 00:03:13.365 SYMLINK libspdk_ut.so 00:03:13.365 SYMLINK libspdk_log.so 00:03:13.624 CC lib/ioat/ioat.o 00:03:13.624 CXX lib/trace_parser/trace.o 00:03:13.624 CC lib/dma/dma.o 00:03:13.624 CC lib/util/bit_array.o 00:03:13.624 CC lib/util/crc16.o 00:03:13.624 CC lib/util/crc32.o 00:03:13.624 CC lib/util/cpuset.o 00:03:13.624 CC lib/util/crc32c.o 00:03:13.624 CC lib/util/base64.o 00:03:13.624 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.624 CC lib/util/crc32_ieee.o 00:03:13.624 CC lib/util/crc64.o 00:03:13.624 CC lib/vfio_user/host/vfio_user.o 00:03:13.882 CC lib/util/dif.o 00:03:13.882 LIB libspdk_dma.a 00:03:13.882 CC lib/util/fd.o 00:03:13.882 LIB libspdk_ioat.a 00:03:13.882 SO libspdk_dma.so.3.0 00:03:13.882 CC lib/util/file.o 00:03:13.882 CC lib/util/hexlify.o 00:03:13.882 SO libspdk_ioat.so.6.0 00:03:13.882 CC lib/util/iov.o 00:03:13.882 SYMLINK libspdk_dma.so 00:03:13.882 CC 
lib/util/math.o 00:03:13.882 CC lib/util/pipe.o 00:03:13.882 SYMLINK libspdk_ioat.so 00:03:13.882 CC lib/util/strerror_tls.o 00:03:13.882 CC lib/util/string.o 00:03:13.882 LIB libspdk_vfio_user.a 00:03:13.882 CC lib/util/uuid.o 00:03:13.882 SO libspdk_vfio_user.so.4.0 00:03:14.141 CC lib/util/fd_group.o 00:03:14.141 SYMLINK libspdk_vfio_user.so 00:03:14.141 CC lib/util/xor.o 00:03:14.141 CC lib/util/zipf.o 00:03:14.400 LIB libspdk_util.a 00:03:14.400 SO libspdk_util.so.8.0 00:03:14.658 SYMLINK libspdk_util.so 00:03:14.658 LIB libspdk_trace_parser.a 00:03:14.658 SO libspdk_trace_parser.so.4.0 00:03:14.658 CC lib/json/json_parse.o 00:03:14.658 CC lib/json/json_util.o 00:03:14.658 CC lib/rdma/common.o 00:03:14.658 CC lib/json/json_write.o 00:03:14.658 CC lib/rdma/rdma_verbs.o 00:03:14.658 CC lib/vmd/vmd.o 00:03:14.658 CC lib/env_dpdk/env.o 00:03:14.658 CC lib/conf/conf.o 00:03:14.658 CC lib/idxd/idxd.o 00:03:14.658 SYMLINK libspdk_trace_parser.so 00:03:14.658 CC lib/idxd/idxd_user.o 00:03:14.916 CC lib/vmd/led.o 00:03:14.916 LIB libspdk_conf.a 00:03:14.916 CC lib/env_dpdk/memory.o 00:03:14.916 CC lib/env_dpdk/pci.o 00:03:14.916 SO libspdk_conf.so.5.0 00:03:14.916 LIB libspdk_rdma.a 00:03:14.916 SO libspdk_rdma.so.5.0 00:03:14.916 LIB libspdk_json.a 00:03:14.916 CC lib/idxd/idxd_kernel.o 00:03:14.916 SYMLINK libspdk_conf.so 00:03:14.916 CC lib/env_dpdk/init.o 00:03:14.916 SO libspdk_json.so.5.1 00:03:14.916 SYMLINK libspdk_rdma.so 00:03:14.916 CC lib/env_dpdk/threads.o 00:03:14.916 CC lib/env_dpdk/pci_ioat.o 00:03:14.916 SYMLINK libspdk_json.so 00:03:14.916 CC lib/env_dpdk/pci_virtio.o 00:03:15.174 CC lib/env_dpdk/pci_vmd.o 00:03:15.174 CC lib/env_dpdk/pci_idxd.o 00:03:15.174 CC lib/env_dpdk/pci_event.o 00:03:15.174 CC lib/env_dpdk/sigbus_handler.o 00:03:15.174 LIB libspdk_idxd.a 00:03:15.174 SO libspdk_idxd.so.11.0 00:03:15.174 CC lib/env_dpdk/pci_dpdk.o 00:03:15.174 SYMLINK libspdk_idxd.so 00:03:15.174 LIB libspdk_vmd.a 00:03:15.174 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.174 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.433 SO libspdk_vmd.so.5.0 00:03:15.433 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.433 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.433 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.433 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.433 SYMLINK libspdk_vmd.so 00:03:15.693 LIB libspdk_jsonrpc.a 00:03:15.693 SO libspdk_jsonrpc.so.5.1 00:03:15.693 SYMLINK libspdk_jsonrpc.so 00:03:15.951 CC lib/rpc/rpc.o 00:03:15.951 LIB libspdk_env_dpdk.a 00:03:16.209 LIB libspdk_rpc.a 00:03:16.209 SO libspdk_env_dpdk.so.13.0 00:03:16.209 SO libspdk_rpc.so.5.0 00:03:16.209 SYMLINK libspdk_rpc.so 00:03:16.209 SYMLINK libspdk_env_dpdk.so 00:03:16.209 CC lib/sock/sock.o 00:03:16.209 CC lib/sock/sock_rpc.o 00:03:16.209 CC lib/notify/notify_rpc.o 00:03:16.209 CC lib/notify/notify.o 00:03:16.210 CC lib/trace/trace.o 00:03:16.210 CC lib/trace/trace_flags.o 00:03:16.210 CC lib/trace/trace_rpc.o 00:03:16.468 LIB libspdk_notify.a 00:03:16.468 SO libspdk_notify.so.5.0 00:03:16.468 LIB libspdk_trace.a 00:03:16.468 SO libspdk_trace.so.9.0 00:03:16.726 SYMLINK libspdk_notify.so 00:03:16.726 SYMLINK libspdk_trace.so 00:03:16.726 LIB libspdk_sock.a 00:03:16.726 SO libspdk_sock.so.8.0 00:03:16.726 CC lib/thread/thread.o 00:03:16.726 CC lib/thread/iobuf.o 00:03:16.985 SYMLINK libspdk_sock.so 00:03:16.985 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.985 CC lib/nvme/nvme_ctrlr.o 00:03:16.985 CC lib/nvme/nvme_fabric.o 00:03:16.985 CC lib/nvme/nvme_ns.o 00:03:16.985 CC lib/nvme/nvme_ns_cmd.o 00:03:16.985 CC lib/nvme/nvme_pcie.o 
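(Note: from "CC lib/log/log.o" onward the log has switched from the DPDK subproject to SPDK's own make output. By the naming, each CC line compiles one C source, LIB archives a static library, SO appears to build the versioned shared object, and SYMLINK creates the unversioned development link. A rough by-hand equivalent of this phase, assuming the stock SPDK configure-and-make flow; the CI wrapper may pass additional configure flags that this excerpt does not show, so the flag below is an assumption.)

    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-shared    # assumption: shared libs, matching the SO/SYMLINK lines
    make -j"$(nproc)"            # emits the CC/LIB/SO/SYMLINK lines seen above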
00:03:16.985 CC lib/nvme/nvme_qpair.o 00:03:16.985 CC lib/nvme/nvme_pcie_common.o 00:03:17.243 CC lib/nvme/nvme.o 00:03:17.809 CC lib/nvme/nvme_quirks.o 00:03:17.809 CC lib/nvme/nvme_transport.o 00:03:17.809 CC lib/nvme/nvme_discovery.o 00:03:17.809 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.809 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.809 CC lib/nvme/nvme_tcp.o 00:03:18.068 CC lib/nvme/nvme_opal.o 00:03:18.068 CC lib/nvme/nvme_io_msg.o 00:03:18.327 LIB libspdk_thread.a 00:03:18.327 SO libspdk_thread.so.9.0 00:03:18.327 CC lib/nvme/nvme_poll_group.o 00:03:18.327 SYMLINK libspdk_thread.so 00:03:18.327 CC lib/nvme/nvme_zns.o 00:03:18.585 CC lib/nvme/nvme_cuse.o 00:03:18.585 CC lib/nvme/nvme_vfio_user.o 00:03:18.585 CC lib/nvme/nvme_rdma.o 00:03:18.585 CC lib/accel/accel.o 00:03:18.844 CC lib/blob/blobstore.o 00:03:18.844 CC lib/blob/request.o 00:03:19.103 CC lib/blob/zeroes.o 00:03:19.103 CC lib/blob/blob_bs_dev.o 00:03:19.103 CC lib/accel/accel_rpc.o 00:03:19.103 CC lib/accel/accel_sw.o 00:03:19.362 CC lib/virtio/virtio.o 00:03:19.362 CC lib/virtio/virtio_vhost_user.o 00:03:19.362 CC lib/init/json_config.o 00:03:19.362 CC lib/vfu_tgt/tgt_endpoint.o 00:03:19.362 CC lib/vfu_tgt/tgt_rpc.o 00:03:19.362 CC lib/init/subsystem.o 00:03:19.620 CC lib/init/subsystem_rpc.o 00:03:19.620 CC lib/init/rpc.o 00:03:19.620 CC lib/virtio/virtio_vfio_user.o 00:03:19.620 CC lib/virtio/virtio_pci.o 00:03:19.620 LIB libspdk_vfu_tgt.a 00:03:19.620 LIB libspdk_init.a 00:03:19.620 LIB libspdk_accel.a 00:03:19.620 SO libspdk_vfu_tgt.so.2.0 00:03:19.620 SO libspdk_accel.so.14.0 00:03:19.620 SO libspdk_init.so.4.0 00:03:19.879 SYMLINK libspdk_vfu_tgt.so 00:03:19.879 SYMLINK libspdk_init.so 00:03:19.879 SYMLINK libspdk_accel.so 00:03:19.879 LIB libspdk_nvme.a 00:03:19.879 LIB libspdk_virtio.a 00:03:19.879 CC lib/bdev/bdev.o 00:03:19.879 CC lib/bdev/bdev_rpc.o 00:03:19.879 CC lib/bdev/part.o 00:03:19.879 CC lib/bdev/bdev_zone.o 00:03:19.879 CC lib/bdev/scsi_nvme.o 00:03:19.879 CC lib/event/app.o 00:03:19.879 CC lib/event/reactor.o 00:03:19.879 SO libspdk_virtio.so.6.0 00:03:20.138 SYMLINK libspdk_virtio.so 00:03:20.138 SO libspdk_nvme.so.12.0 00:03:20.138 CC lib/event/log_rpc.o 00:03:20.138 CC lib/event/app_rpc.o 00:03:20.138 CC lib/event/scheduler_static.o 00:03:20.397 SYMLINK libspdk_nvme.so 00:03:20.397 LIB libspdk_event.a 00:03:20.397 SO libspdk_event.so.12.0 00:03:20.656 SYMLINK libspdk_event.so 00:03:21.592 LIB libspdk_blob.a 00:03:21.592 SO libspdk_blob.so.10.1 00:03:21.592 SYMLINK libspdk_blob.so 00:03:21.592 CC lib/lvol/lvol.o 00:03:21.592 CC lib/blobfs/blobfs.o 00:03:21.592 CC lib/blobfs/tree.o 00:03:22.159 LIB libspdk_bdev.a 00:03:22.418 SO libspdk_bdev.so.14.0 00:03:22.418 SYMLINK libspdk_bdev.so 00:03:22.418 LIB libspdk_lvol.a 00:03:22.418 SO libspdk_lvol.so.9.1 00:03:22.418 LIB libspdk_blobfs.a 00:03:22.722 CC lib/ublk/ublk.o 00:03:22.722 CC lib/ublk/ublk_rpc.o 00:03:22.722 CC lib/scsi/dev.o 00:03:22.722 CC lib/nbd/nbd_rpc.o 00:03:22.722 CC lib/nbd/nbd.o 00:03:22.722 CC lib/scsi/lun.o 00:03:22.722 CC lib/ftl/ftl_core.o 00:03:22.722 CC lib/nvmf/ctrlr.o 00:03:22.722 SO libspdk_blobfs.so.9.0 00:03:22.722 SYMLINK libspdk_lvol.so 00:03:22.722 CC lib/nvmf/ctrlr_discovery.o 00:03:22.722 SYMLINK libspdk_blobfs.so 00:03:22.722 CC lib/nvmf/ctrlr_bdev.o 00:03:22.722 CC lib/nvmf/subsystem.o 00:03:22.722 CC lib/nvmf/nvmf.o 00:03:22.722 CC lib/scsi/port.o 00:03:22.981 CC lib/scsi/scsi.o 00:03:22.981 CC lib/ftl/ftl_init.o 00:03:22.981 LIB libspdk_nbd.a 00:03:22.981 CC lib/ftl/ftl_layout.o 00:03:22.981 SO 
libspdk_nbd.so.6.0 00:03:22.981 CC lib/scsi/scsi_bdev.o 00:03:22.981 SYMLINK libspdk_nbd.so 00:03:22.981 CC lib/scsi/scsi_pr.o 00:03:22.981 CC lib/nvmf/nvmf_rpc.o 00:03:23.239 CC lib/nvmf/transport.o 00:03:23.239 LIB libspdk_ublk.a 00:03:23.239 SO libspdk_ublk.so.2.0 00:03:23.239 CC lib/nvmf/tcp.o 00:03:23.239 SYMLINK libspdk_ublk.so 00:03:23.239 CC lib/nvmf/vfio_user.o 00:03:23.239 CC lib/ftl/ftl_debug.o 00:03:23.498 CC lib/ftl/ftl_io.o 00:03:23.498 CC lib/scsi/scsi_rpc.o 00:03:23.498 CC lib/scsi/task.o 00:03:23.756 CC lib/nvmf/rdma.o 00:03:23.756 CC lib/ftl/ftl_sb.o 00:03:23.756 CC lib/ftl/ftl_l2p.o 00:03:23.756 LIB libspdk_scsi.a 00:03:23.756 CC lib/ftl/ftl_l2p_flat.o 00:03:23.756 SO libspdk_scsi.so.8.0 00:03:23.756 CC lib/ftl/ftl_nv_cache.o 00:03:23.756 CC lib/ftl/ftl_band.o 00:03:23.756 CC lib/ftl/ftl_band_ops.o 00:03:23.756 CC lib/ftl/ftl_writer.o 00:03:24.015 SYMLINK libspdk_scsi.so 00:03:24.015 CC lib/iscsi/conn.o 00:03:24.015 CC lib/iscsi/init_grp.o 00:03:24.015 CC lib/iscsi/iscsi.o 00:03:24.274 CC lib/ftl/ftl_rq.o 00:03:24.274 CC lib/ftl/ftl_reloc.o 00:03:24.274 CC lib/ftl/ftl_l2p_cache.o 00:03:24.274 CC lib/ftl/ftl_p2l.o 00:03:24.532 CC lib/vhost/vhost.o 00:03:24.532 CC lib/vhost/vhost_rpc.o 00:03:24.532 CC lib/iscsi/md5.o 00:03:24.532 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.791 CC lib/iscsi/param.o 00:03:24.791 CC lib/vhost/vhost_scsi.o 00:03:24.791 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.791 CC lib/vhost/vhost_blk.o 00:03:24.791 CC lib/vhost/rte_vhost_user.o 00:03:25.050 CC lib/iscsi/portal_grp.o 00:03:25.050 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.050 CC lib/iscsi/tgt_node.o 00:03:25.050 CC lib/iscsi/iscsi_subsystem.o 00:03:25.309 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.309 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.309 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.309 CC lib/iscsi/iscsi_rpc.o 00:03:25.567 CC lib/iscsi/task.o 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.567 LIB libspdk_nvmf.a 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.567 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.826 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.826 SO libspdk_nvmf.so.17.0 00:03:25.826 LIB libspdk_iscsi.a 00:03:25.826 CC lib/ftl/utils/ftl_conf.o 00:03:25.826 CC lib/ftl/utils/ftl_md.o 00:03:25.826 CC lib/ftl/utils/ftl_mempool.o 00:03:25.826 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.826 SO libspdk_iscsi.so.7.0 00:03:25.826 CC lib/ftl/utils/ftl_property.o 00:03:25.826 LIB libspdk_vhost.a 00:03:26.085 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.085 SYMLINK libspdk_nvmf.so 00:03:26.085 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.085 SO libspdk_vhost.so.7.1 00:03:26.085 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.085 SYMLINK libspdk_iscsi.so 00:03:26.085 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.085 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.085 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.085 SYMLINK libspdk_vhost.so 00:03:26.085 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.085 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.085 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.085 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.343 CC lib/ftl/base/ftl_base_dev.o 00:03:26.343 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.343 CC lib/ftl/ftl_trace.o 00:03:26.601 LIB libspdk_ftl.a 00:03:26.601 SO libspdk_ftl.so.8.0 00:03:26.860 SYMLINK libspdk_ftl.so 00:03:27.118 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.118 CC module/vfu_device/vfu_virtio.o 00:03:27.118 CC 
module/sock/posix/posix.o 00:03:27.118 CC module/accel/error/accel_error.o 00:03:27.118 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.118 CC module/blob/bdev/blob_bdev.o 00:03:27.118 CC module/accel/ioat/accel_ioat.o 00:03:27.118 CC module/accel/dsa/accel_dsa.o 00:03:27.118 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.119 CC module/accel/iaa/accel_iaa.o 00:03:27.119 LIB libspdk_env_dpdk_rpc.a 00:03:27.377 SO libspdk_env_dpdk_rpc.so.5.0 00:03:27.377 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.377 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.377 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:27.377 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.377 CC module/accel/error/accel_error_rpc.o 00:03:27.377 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.377 LIB libspdk_scheduler_dynamic.a 00:03:27.377 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.377 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.377 SO libspdk_scheduler_dynamic.so.3.0 00:03:27.377 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.377 LIB libspdk_blob_bdev.a 00:03:27.377 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.377 SO libspdk_blob_bdev.so.10.1 00:03:27.377 LIB libspdk_accel_dsa.a 00:03:27.377 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.377 LIB libspdk_accel_ioat.a 00:03:27.377 LIB libspdk_accel_error.a 00:03:27.377 SO libspdk_accel_dsa.so.4.0 00:03:27.377 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.636 SYMLINK libspdk_blob_bdev.so 00:03:27.636 LIB libspdk_accel_iaa.a 00:03:27.636 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.636 SO libspdk_accel_ioat.so.5.0 00:03:27.636 SO libspdk_accel_error.so.1.0 00:03:27.636 SO libspdk_accel_iaa.so.2.0 00:03:27.636 SYMLINK libspdk_accel_dsa.so 00:03:27.636 SYMLINK libspdk_accel_iaa.so 00:03:27.636 SYMLINK libspdk_accel_error.so 00:03:27.636 SYMLINK libspdk_accel_ioat.so 00:03:27.636 LIB libspdk_scheduler_gscheduler.a 00:03:27.636 SO libspdk_scheduler_gscheduler.so.3.0 00:03:27.636 CC module/bdev/error/vbdev_error.o 00:03:27.636 CC module/bdev/gpt/gpt.o 00:03:27.636 CC module/bdev/delay/vbdev_delay.o 00:03:27.636 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.895 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.895 CC module/bdev/malloc/bdev_malloc.o 00:03:27.895 CC module/bdev/null/bdev_null.o 00:03:27.895 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.895 LIB libspdk_vfu_device.a 00:03:27.895 LIB libspdk_sock_posix.a 00:03:27.895 SO libspdk_sock_posix.so.5.0 00:03:27.895 SO libspdk_vfu_device.so.2.0 00:03:27.895 CC module/bdev/nvme/bdev_nvme.o 00:03:27.895 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.895 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:27.895 SYMLINK libspdk_sock_posix.so 00:03:27.895 SYMLINK libspdk_vfu_device.so 00:03:27.895 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.895 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.154 CC module/bdev/null/bdev_null_rpc.o 00:03:28.154 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.154 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.154 LIB libspdk_bdev_delay.a 00:03:28.154 LIB libspdk_bdev_error.a 00:03:28.154 LIB libspdk_blobfs_bdev.a 00:03:28.154 SO libspdk_bdev_error.so.5.0 00:03:28.154 SO libspdk_bdev_delay.so.5.0 00:03:28.154 SO libspdk_blobfs_bdev.so.5.0 00:03:28.154 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.154 LIB libspdk_bdev_gpt.a 00:03:28.154 LIB libspdk_bdev_null.a 00:03:28.154 SYMLINK libspdk_bdev_delay.so 00:03:28.413 SO libspdk_bdev_gpt.so.5.0 00:03:28.413 SYMLINK libspdk_bdev_error.so 00:03:28.413 SYMLINK libspdk_blobfs_bdev.so 00:03:28.413 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.413 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.413 SO libspdk_bdev_null.so.5.0 00:03:28.413 LIB libspdk_bdev_malloc.a 00:03:28.413 SYMLINK libspdk_bdev_gpt.so 00:03:28.413 SYMLINK libspdk_bdev_null.so 00:03:28.413 SO libspdk_bdev_malloc.so.5.0 00:03:28.413 CC module/bdev/raid/bdev_raid.o 00:03:28.413 SYMLINK libspdk_bdev_malloc.so 00:03:28.413 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.413 CC module/bdev/aio/bdev_aio.o 00:03:28.413 CC module/bdev/split/vbdev_split.o 00:03:28.413 LIB libspdk_bdev_passthru.a 00:03:28.413 SO libspdk_bdev_passthru.so.5.0 00:03:28.672 CC module/bdev/ftl/bdev_ftl.o 00:03:28.672 LIB libspdk_bdev_lvol.a 00:03:28.672 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.672 SYMLINK libspdk_bdev_passthru.so 00:03:28.672 SO libspdk_bdev_lvol.so.5.0 00:03:28.672 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.672 SYMLINK libspdk_bdev_lvol.so 00:03:28.672 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.672 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.672 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.672 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.931 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.931 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.931 CC module/bdev/nvme/nvme_rpc.o 00:03:28.931 LIB libspdk_bdev_split.a 00:03:28.931 LIB libspdk_bdev_zone_block.a 00:03:28.931 SO libspdk_bdev_split.so.5.0 00:03:28.931 LIB libspdk_bdev_iscsi.a 00:03:28.931 SO libspdk_bdev_zone_block.so.5.0 00:03:28.931 SO libspdk_bdev_iscsi.so.5.0 00:03:28.931 SYMLINK libspdk_bdev_split.so 00:03:28.931 LIB libspdk_bdev_aio.a 00:03:28.931 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.931 SYMLINK libspdk_bdev_zone_block.so 00:03:28.931 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.931 SO libspdk_bdev_aio.so.5.0 00:03:28.931 SYMLINK libspdk_bdev_iscsi.so 00:03:28.931 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.931 CC module/bdev/nvme/vbdev_opal.o 00:03:29.199 SYMLINK libspdk_bdev_aio.so 00:03:29.199 LIB libspdk_bdev_ftl.a 00:03:29.199 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.199 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.199 SO libspdk_bdev_ftl.so.5.0 00:03:29.199 SYMLINK libspdk_bdev_ftl.so 00:03:29.199 CC module/bdev/raid/raid0.o 00:03:29.199 CC module/bdev/raid/raid1.o 00:03:29.199 CC module/bdev/raid/concat.o 00:03:29.199 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.199 LIB libspdk_bdev_virtio.a 00:03:29.473 SO libspdk_bdev_virtio.so.5.0 00:03:29.473 SYMLINK libspdk_bdev_virtio.so 00:03:29.473 LIB libspdk_bdev_raid.a 00:03:29.473 SO libspdk_bdev_raid.so.5.0 00:03:29.473 SYMLINK libspdk_bdev_raid.so 00:03:30.041 LIB libspdk_bdev_nvme.a 00:03:30.041 SO libspdk_bdev_nvme.so.6.0 00:03:30.300 SYMLINK libspdk_bdev_nvme.so 00:03:30.558 CC module/event/subsystems/iobuf/iobuf.o 00:03:30.558 CC module/event/subsystems/vmd/vmd.o 00:03:30.558 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:30.558 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:30.558 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:30.558 CC module/event/subsystems/sock/sock.o 00:03:30.558 CC module/event/subsystems/scheduler/scheduler.o 00:03:30.558 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:30.558 LIB libspdk_event_vhost_blk.a 00:03:30.817 LIB libspdk_event_sock.a 00:03:30.817 LIB libspdk_event_vfu_tgt.a 00:03:30.817 LIB libspdk_event_vmd.a 00:03:30.817 LIB libspdk_event_iobuf.a 00:03:30.817 SO libspdk_event_vhost_blk.so.2.0 00:03:30.817 LIB libspdk_event_scheduler.a 00:03:30.817 SO libspdk_event_sock.so.4.0 00:03:30.817 SO 
libspdk_event_vfu_tgt.so.2.0 00:03:30.817 SO libspdk_event_scheduler.so.3.0 00:03:30.817 SO libspdk_event_iobuf.so.2.0 00:03:30.817 SO libspdk_event_vmd.so.5.0 00:03:30.817 SYMLINK libspdk_event_vhost_blk.so 00:03:30.817 SYMLINK libspdk_event_sock.so 00:03:30.817 SYMLINK libspdk_event_vfu_tgt.so 00:03:30.817 SYMLINK libspdk_event_scheduler.so 00:03:30.817 SYMLINK libspdk_event_vmd.so 00:03:30.817 SYMLINK libspdk_event_iobuf.so 00:03:31.076 CC module/event/subsystems/accel/accel.o 00:03:31.076 LIB libspdk_event_accel.a 00:03:31.076 SO libspdk_event_accel.so.5.0 00:03:31.334 SYMLINK libspdk_event_accel.so 00:03:31.334 CC module/event/subsystems/bdev/bdev.o 00:03:31.592 LIB libspdk_event_bdev.a 00:03:31.592 SO libspdk_event_bdev.so.5.0 00:03:31.851 SYMLINK libspdk_event_bdev.so 00:03:31.851 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:31.851 CC module/event/subsystems/nbd/nbd.o 00:03:31.851 CC module/event/subsystems/scsi/scsi.o 00:03:31.851 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:31.851 CC module/event/subsystems/ublk/ublk.o 00:03:32.109 LIB libspdk_event_nbd.a 00:03:32.109 LIB libspdk_event_ublk.a 00:03:32.109 LIB libspdk_event_scsi.a 00:03:32.109 SO libspdk_event_nbd.so.5.0 00:03:32.109 SO libspdk_event_ublk.so.2.0 00:03:32.109 SO libspdk_event_scsi.so.5.0 00:03:32.109 SYMLINK libspdk_event_nbd.so 00:03:32.109 SYMLINK libspdk_event_ublk.so 00:03:32.109 LIB libspdk_event_nvmf.a 00:03:32.109 SYMLINK libspdk_event_scsi.so 00:03:32.109 SO libspdk_event_nvmf.so.5.0 00:03:32.367 SYMLINK libspdk_event_nvmf.so 00:03:32.367 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.367 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:32.626 LIB libspdk_event_vhost_scsi.a 00:03:32.626 LIB libspdk_event_iscsi.a 00:03:32.626 SO libspdk_event_vhost_scsi.so.2.0 00:03:32.626 SO libspdk_event_iscsi.so.5.0 00:03:32.626 SYMLINK libspdk_event_vhost_scsi.so 00:03:32.626 SYMLINK libspdk_event_iscsi.so 00:03:32.626 SO libspdk.so.5.0 00:03:32.885 SYMLINK libspdk.so 00:03:32.885 CXX app/trace/trace.o 00:03:32.885 TEST_HEADER include/spdk/accel.h 00:03:32.885 TEST_HEADER include/spdk/accel_module.h 00:03:32.885 TEST_HEADER include/spdk/assert.h 00:03:32.885 TEST_HEADER include/spdk/barrier.h 00:03:32.885 TEST_HEADER include/spdk/base64.h 00:03:32.885 TEST_HEADER include/spdk/bdev.h 00:03:32.885 TEST_HEADER include/spdk/bdev_module.h 00:03:32.885 TEST_HEADER include/spdk/bdev_zone.h 00:03:32.885 TEST_HEADER include/spdk/bit_array.h 00:03:32.885 TEST_HEADER include/spdk/bit_pool.h 00:03:32.885 TEST_HEADER include/spdk/blob_bdev.h 00:03:32.885 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:32.885 TEST_HEADER include/spdk/blobfs.h 00:03:32.885 TEST_HEADER include/spdk/blob.h 00:03:32.885 TEST_HEADER include/spdk/conf.h 00:03:32.885 TEST_HEADER include/spdk/config.h 00:03:32.885 TEST_HEADER include/spdk/cpuset.h 00:03:32.885 TEST_HEADER include/spdk/crc16.h 00:03:32.885 TEST_HEADER include/spdk/crc32.h 00:03:32.885 TEST_HEADER include/spdk/crc64.h 00:03:32.885 TEST_HEADER include/spdk/dif.h 00:03:32.885 TEST_HEADER include/spdk/dma.h 00:03:32.885 CC examples/accel/perf/accel_perf.o 00:03:32.885 TEST_HEADER include/spdk/endian.h 00:03:32.885 TEST_HEADER include/spdk/env_dpdk.h 00:03:32.885 TEST_HEADER include/spdk/env.h 00:03:32.885 CC test/event/event_perf/event_perf.o 00:03:32.885 TEST_HEADER include/spdk/event.h 00:03:32.885 TEST_HEADER include/spdk/fd_group.h 00:03:32.885 TEST_HEADER include/spdk/fd.h 00:03:32.885 TEST_HEADER include/spdk/file.h 00:03:32.885 TEST_HEADER include/spdk/ftl.h 00:03:32.885 
TEST_HEADER include/spdk/gpt_spec.h 00:03:32.885 TEST_HEADER include/spdk/hexlify.h 00:03:32.885 TEST_HEADER include/spdk/histogram_data.h 00:03:32.885 TEST_HEADER include/spdk/idxd.h 00:03:33.143 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.143 TEST_HEADER include/spdk/init.h 00:03:33.143 TEST_HEADER include/spdk/ioat.h 00:03:33.143 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.143 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.143 CC test/accel/dif/dif.o 00:03:33.143 TEST_HEADER include/spdk/json.h 00:03:33.143 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.143 TEST_HEADER include/spdk/likely.h 00:03:33.143 CC test/blobfs/mkfs/mkfs.o 00:03:33.143 CC test/dma/test_dma/test_dma.o 00:03:33.143 CC test/app/bdev_svc/bdev_svc.o 00:03:33.143 TEST_HEADER include/spdk/log.h 00:03:33.143 TEST_HEADER include/spdk/lvol.h 00:03:33.143 TEST_HEADER include/spdk/memory.h 00:03:33.143 TEST_HEADER include/spdk/mmio.h 00:03:33.143 TEST_HEADER include/spdk/nbd.h 00:03:33.143 CC test/bdev/bdevio/bdevio.o 00:03:33.143 TEST_HEADER include/spdk/notify.h 00:03:33.143 TEST_HEADER include/spdk/nvme.h 00:03:33.143 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.143 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.143 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.143 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.143 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.143 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.143 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.143 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.143 TEST_HEADER include/spdk/nvmf.h 00:03:33.143 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.143 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.143 TEST_HEADER include/spdk/opal.h 00:03:33.143 TEST_HEADER include/spdk/opal_spec.h 00:03:33.143 TEST_HEADER include/spdk/pci_ids.h 00:03:33.143 TEST_HEADER include/spdk/pipe.h 00:03:33.143 TEST_HEADER include/spdk/queue.h 00:03:33.143 TEST_HEADER include/spdk/reduce.h 00:03:33.143 TEST_HEADER include/spdk/rpc.h 00:03:33.143 TEST_HEADER include/spdk/scheduler.h 00:03:33.143 TEST_HEADER include/spdk/scsi.h 00:03:33.143 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.143 TEST_HEADER include/spdk/sock.h 00:03:33.143 TEST_HEADER include/spdk/stdinc.h 00:03:33.143 TEST_HEADER include/spdk/string.h 00:03:33.143 TEST_HEADER include/spdk/thread.h 00:03:33.143 TEST_HEADER include/spdk/trace.h 00:03:33.143 TEST_HEADER include/spdk/trace_parser.h 00:03:33.143 TEST_HEADER include/spdk/tree.h 00:03:33.143 TEST_HEADER include/spdk/ublk.h 00:03:33.143 TEST_HEADER include/spdk/util.h 00:03:33.143 TEST_HEADER include/spdk/uuid.h 00:03:33.143 TEST_HEADER include/spdk/version.h 00:03:33.143 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.143 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.143 TEST_HEADER include/spdk/vhost.h 00:03:33.143 TEST_HEADER include/spdk/vmd.h 00:03:33.143 TEST_HEADER include/spdk/xor.h 00:03:33.143 TEST_HEADER include/spdk/zipf.h 00:03:33.143 CXX test/cpp_headers/accel.o 00:03:33.143 LINK event_perf 00:03:33.143 LINK bdev_svc 00:03:33.402 LINK mkfs 00:03:33.402 CXX test/cpp_headers/accel_module.o 00:03:33.402 LINK spdk_trace 00:03:33.402 CC test/event/reactor/reactor.o 00:03:33.402 LINK dif 00:03:33.402 LINK accel_perf 00:03:33.402 LINK test_dma 00:03:33.402 CXX test/cpp_headers/assert.o 00:03:33.402 LINK bdevio 00:03:33.402 CXX test/cpp_headers/barrier.o 00:03:33.660 LINK reactor 00:03:33.660 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.660 CC app/trace_record/trace_record.o 00:03:33.660 LINK mem_callbacks 00:03:33.660 CXX 
test/cpp_headers/base64.o 00:03:33.660 CXX test/cpp_headers/bdev.o 00:03:33.660 CC app/nvmf_tgt/nvmf_main.o 00:03:33.660 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.660 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.660 CC test/event/reactor_perf/reactor_perf.o 00:03:33.660 CC test/event/app_repeat/app_repeat.o 00:03:33.917 CC test/env/vtophys/vtophys.o 00:03:33.917 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.917 CXX test/cpp_headers/bdev_module.o 00:03:33.917 LINK spdk_trace_record 00:03:33.917 LINK reactor_perf 00:03:33.917 LINK nvmf_tgt 00:03:33.917 LINK app_repeat 00:03:33.917 LINK iscsi_tgt 00:03:33.917 LINK hello_bdev 00:03:33.917 LINK nvme_fuzz 00:03:33.917 LINK vtophys 00:03:33.917 LINK env_dpdk_post_init 00:03:33.917 CXX test/cpp_headers/bdev_zone.o 00:03:33.917 CXX test/cpp_headers/bit_array.o 00:03:34.175 CXX test/cpp_headers/bit_pool.o 00:03:34.175 CXX test/cpp_headers/blob_bdev.o 00:03:34.175 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.175 CC test/event/scheduler/scheduler.o 00:03:34.175 CC test/env/memory/memory_ut.o 00:03:34.175 CC examples/blob/hello_world/hello_blob.o 00:03:34.175 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.175 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.175 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.175 CC test/app/histogram_perf/histogram_perf.o 00:03:34.175 CC app/spdk_tgt/spdk_tgt.o 00:03:34.432 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.432 CXX test/cpp_headers/blobfs.o 00:03:34.432 LINK scheduler 00:03:34.432 LINK histogram_perf 00:03:34.432 LINK hello_blob 00:03:34.432 CXX test/cpp_headers/blob.o 00:03:34.432 LINK spdk_tgt 00:03:34.432 CXX test/cpp_headers/conf.o 00:03:34.687 CXX test/cpp_headers/config.o 00:03:34.687 CC examples/ioat/perf/perf.o 00:03:34.687 CC examples/ioat/verify/verify.o 00:03:34.687 LINK vhost_fuzz 00:03:34.687 CXX test/cpp_headers/cpuset.o 00:03:34.687 CC app/spdk_lspci/spdk_lspci.o 00:03:34.687 CC examples/blob/cli/blobcli.o 00:03:34.944 LINK verify 00:03:34.944 CXX test/cpp_headers/crc16.o 00:03:34.944 LINK ioat_perf 00:03:34.944 CC examples/nvme/hello_world/hello_world.o 00:03:34.944 LINK spdk_lspci 00:03:34.944 CC examples/sock/hello_world/hello_sock.o 00:03:34.944 LINK bdevperf 00:03:34.944 CXX test/cpp_headers/crc32.o 00:03:35.202 CC app/spdk_nvme_perf/perf.o 00:03:35.202 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.202 LINK memory_ut 00:03:35.202 CC app/spdk_nvme_identify/identify.o 00:03:35.202 LINK hello_world 00:03:35.202 LINK hello_sock 00:03:35.202 CXX test/cpp_headers/crc64.o 00:03:35.202 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.202 LINK lsvmd 00:03:35.202 LINK blobcli 00:03:35.460 CC test/env/pci/pci_ut.o 00:03:35.460 CC examples/nvme/reconnect/reconnect.o 00:03:35.460 CXX test/cpp_headers/dif.o 00:03:35.460 CC app/spdk_top/spdk_top.o 00:03:35.460 LINK spdk_nvme_discover 00:03:35.460 CC examples/vmd/led/led.o 00:03:35.460 CXX test/cpp_headers/dma.o 00:03:35.460 CC app/vhost/vhost.o 00:03:35.718 LINK led 00:03:35.718 CC app/spdk_dd/spdk_dd.o 00:03:35.718 LINK reconnect 00:03:35.718 CXX test/cpp_headers/endian.o 00:03:35.718 LINK pci_ut 00:03:35.718 LINK vhost 00:03:35.718 LINK iscsi_fuzz 00:03:35.718 LINK spdk_nvme_identify 00:03:35.976 LINK spdk_nvme_perf 00:03:35.976 CXX test/cpp_headers/env_dpdk.o 00:03:35.976 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.976 CC examples/nvmf/nvmf/nvmf.o 00:03:35.976 LINK spdk_dd 00:03:35.976 CC examples/nvme/arbitration/arbitration.o 00:03:35.976 CXX test/cpp_headers/env.o 00:03:35.976 CC test/app/jsoncat/jsoncat.o 
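(Note: the "CXX test/cpp_headers/<name>.o" lines interleaved above are a header self-containment sweep: each public spdk/*.h listed in the TEST_HEADER block is compiled as a standalone C++ translation unit, so a missing include or extern "C" guard fails the build. A hypothetical by-hand equivalent for one of the listed headers; the scratch file name and the include path are illustrative only and assume the repository root as the working directory.)

    # Check that one public header compiles on its own under a C++ compiler.
    printf '#include <spdk/nvme.h>\n' > check_nvme.cpp
    c++ -I include -c check_nvme.cpp -o check_nvme.o \
        && echo 'spdk/nvme.h compiles standalone'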
00:03:35.976 CC app/fio/nvme/fio_plugin.o 00:03:35.976 CC test/app/stub/stub.o 00:03:35.977 CC test/lvol/esnap/esnap.o 00:03:36.234 LINK spdk_top 00:03:36.234 LINK jsoncat 00:03:36.234 LINK nvmf 00:03:36.234 CXX test/cpp_headers/event.o 00:03:36.234 CXX test/cpp_headers/fd_group.o 00:03:36.234 LINK stub 00:03:36.234 CXX test/cpp_headers/fd.o 00:03:36.235 CXX test/cpp_headers/file.o 00:03:36.235 LINK arbitration 00:03:36.494 LINK nvme_manage 00:03:36.494 CXX test/cpp_headers/ftl.o 00:03:36.494 CXX test/cpp_headers/gpt_spec.o 00:03:36.494 CXX test/cpp_headers/hexlify.o 00:03:36.494 CXX test/cpp_headers/histogram_data.o 00:03:36.494 CXX test/cpp_headers/idxd.o 00:03:36.494 CC examples/util/zipf/zipf.o 00:03:36.494 CXX test/cpp_headers/idxd_spec.o 00:03:36.494 LINK spdk_nvme 00:03:36.494 CC examples/nvme/hotplug/hotplug.o 00:03:36.494 CXX test/cpp_headers/init.o 00:03:36.494 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.494 CC examples/nvme/abort/abort.o 00:03:36.752 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.752 CXX test/cpp_headers/ioat.o 00:03:36.752 LINK zipf 00:03:36.752 CXX test/cpp_headers/ioat_spec.o 00:03:36.752 CC app/fio/bdev/fio_plugin.o 00:03:36.752 LINK cmb_copy 00:03:36.752 LINK pmr_persistence 00:03:36.752 LINK hotplug 00:03:36.752 CXX test/cpp_headers/iscsi_spec.o 00:03:37.010 CC examples/thread/thread/thread_ex.o 00:03:37.010 CC examples/idxd/perf/perf.o 00:03:37.010 LINK abort 00:03:37.010 CXX test/cpp_headers/json.o 00:03:37.010 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.010 CC test/rpc_client/rpc_client_test.o 00:03:37.010 CC test/nvme/aer/aer.o 00:03:37.269 LINK thread 00:03:37.269 CXX test/cpp_headers/jsonrpc.o 00:03:37.269 CC test/nvme/reset/reset.o 00:03:37.269 LINK interrupt_tgt 00:03:37.269 LINK rpc_client_test 00:03:37.269 LINK spdk_bdev 00:03:37.269 LINK idxd_perf 00:03:37.269 CXX test/cpp_headers/likely.o 00:03:37.269 CXX test/cpp_headers/log.o 00:03:37.269 LINK aer 00:03:37.269 CXX test/cpp_headers/lvol.o 00:03:37.269 CXX test/cpp_headers/memory.o 00:03:37.527 LINK reset 00:03:37.527 CC test/thread/poller_perf/poller_perf.o 00:03:37.527 CC test/nvme/sgl/sgl.o 00:03:37.528 CXX test/cpp_headers/mmio.o 00:03:37.528 CC test/nvme/e2edp/nvme_dp.o 00:03:37.528 CC test/nvme/overhead/overhead.o 00:03:37.528 CC test/nvme/err_injection/err_injection.o 00:03:37.528 CC test/nvme/startup/startup.o 00:03:37.528 LINK poller_perf 00:03:37.528 CC test/nvme/reserve/reserve.o 00:03:37.786 CXX test/cpp_headers/nbd.o 00:03:37.786 CXX test/cpp_headers/notify.o 00:03:37.786 LINK sgl 00:03:37.786 CXX test/cpp_headers/nvme.o 00:03:37.786 LINK startup 00:03:37.786 LINK err_injection 00:03:37.786 LINK overhead 00:03:37.786 LINK nvme_dp 00:03:37.786 LINK reserve 00:03:37.786 CXX test/cpp_headers/nvme_intel.o 00:03:38.044 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.044 CC test/nvme/simple_copy/simple_copy.o 00:03:38.044 CC test/nvme/boot_partition/boot_partition.o 00:03:38.044 CC test/nvme/connect_stress/connect_stress.o 00:03:38.044 CC test/nvme/compliance/nvme_compliance.o 00:03:38.044 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.302 LINK boot_partition 00:03:38.302 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.302 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.302 CC test/nvme/fdp/fdp.o 00:03:38.302 LINK connect_stress 00:03:38.302 LINK simple_copy 00:03:38.302 LINK doorbell_aers 00:03:38.302 CXX test/cpp_headers/nvme_spec.o 00:03:38.302 CXX test/cpp_headers/nvme_zns.o 00:03:38.302 LINK nvme_compliance 00:03:38.560 CC test/nvme/cuse/cuse.o 
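(Note: alongside the test/nvme/* tools being compiled here, the log shows the fio plugin objects, app/fio/nvme/fio_plugin.o and app/fio/bdev/fio_plugin.o, going in. For reference, a sketch of how the NVMe plugin is exercised once built, following SPDK's documented LD_PRELOAD pattern; the plugin path assumes the usual build layout and the PCIe address is a placeholder, so treat the whole command as illustrative rather than taken from this run.)

    # Hypothetical smoke run of the fio NVMe plugin built above.
    LD_PRELOAD=./build/fio/spdk_nvme fio --name=smoke --ioengine=spdk \
        --thread=1 --filename='trtype=PCIe traddr=0000.00.00.0 ns=1' \
        --rw=randread --bs=4k --time_based=1 --runtime=5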
00:03:38.560 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.560 LINK fused_ordering 00:03:38.560 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.560 LINK fdp 00:03:38.560 CXX test/cpp_headers/nvmf.o 00:03:38.560 CXX test/cpp_headers/nvmf_spec.o 00:03:38.560 CXX test/cpp_headers/nvmf_transport.o 00:03:38.560 CXX test/cpp_headers/opal.o 00:03:38.560 CXX test/cpp_headers/opal_spec.o 00:03:38.560 CXX test/cpp_headers/pci_ids.o 00:03:38.818 CXX test/cpp_headers/pipe.o 00:03:38.818 CXX test/cpp_headers/queue.o 00:03:38.818 CXX test/cpp_headers/reduce.o 00:03:38.818 CXX test/cpp_headers/rpc.o 00:03:38.818 CXX test/cpp_headers/scheduler.o 00:03:38.818 CXX test/cpp_headers/scsi.o 00:03:38.818 CXX test/cpp_headers/scsi_spec.o 00:03:38.818 CXX test/cpp_headers/sock.o 00:03:38.818 CXX test/cpp_headers/stdinc.o 00:03:38.818 CXX test/cpp_headers/string.o 00:03:38.818 CXX test/cpp_headers/thread.o 00:03:39.076 CXX test/cpp_headers/trace.o 00:03:39.076 CXX test/cpp_headers/trace_parser.o 00:03:39.076 CXX test/cpp_headers/tree.o 00:03:39.076 CXX test/cpp_headers/ublk.o 00:03:39.076 CXX test/cpp_headers/util.o 00:03:39.076 CXX test/cpp_headers/uuid.o 00:03:39.076 CXX test/cpp_headers/version.o 00:03:39.076 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.076 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.076 CXX test/cpp_headers/vhost.o 00:03:39.076 CXX test/cpp_headers/vmd.o 00:03:39.076 CXX test/cpp_headers/xor.o 00:03:39.076 CXX test/cpp_headers/zipf.o 00:03:39.642 LINK cuse 00:03:40.578 LINK esnap 00:03:43.880 ************************************ 00:03:43.880 END TEST make 00:03:43.880 ************************************ 00:03:43.880 00:03:43.880 real 1m2.083s 00:03:43.880 user 6m31.887s 00:03:43.880 sys 1m34.851s 00:03:43.880 03:47:18 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:43.880 03:47:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.880 03:47:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:43.880 03:47:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:43.880 03:47:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:43.880 03:47:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:43.880 03:47:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:43.880 03:47:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:43.880 03:47:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:43.880 03:47:18 -- scripts/common.sh@335 -- # IFS=.-: 00:03:43.880 03:47:18 -- scripts/common.sh@335 -- # read -ra ver1 00:03:43.880 03:47:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.880 03:47:18 -- scripts/common.sh@336 -- # read -ra ver2 00:03:43.880 03:47:18 -- scripts/common.sh@337 -- # local 'op=<' 00:03:43.880 03:47:18 -- scripts/common.sh@339 -- # ver1_l=2 00:03:43.880 03:47:18 -- scripts/common.sh@340 -- # ver2_l=1 00:03:43.880 03:47:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:43.880 03:47:18 -- scripts/common.sh@343 -- # case "$op" in 00:03:43.880 03:47:18 -- scripts/common.sh@344 -- # : 1 00:03:43.880 03:47:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:43.880 03:47:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.880 03:47:18 -- scripts/common.sh@364 -- # decimal 1 00:03:43.880 03:47:18 -- scripts/common.sh@352 -- # local d=1 00:03:43.880 03:47:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.880 03:47:18 -- scripts/common.sh@354 -- # echo 1 00:03:43.880 03:47:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:43.880 03:47:18 -- scripts/common.sh@365 -- # decimal 2 00:03:43.880 03:47:18 -- scripts/common.sh@352 -- # local d=2 00:03:43.880 03:47:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.880 03:47:18 -- scripts/common.sh@354 -- # echo 2 00:03:43.880 03:47:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:43.880 03:47:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:43.880 03:47:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:43.880 03:47:18 -- scripts/common.sh@367 -- # return 0 00:03:43.880 03:47:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.880 03:47:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:43.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.880 --rc genhtml_branch_coverage=1 00:03:43.880 --rc genhtml_function_coverage=1 00:03:43.880 --rc genhtml_legend=1 00:03:43.880 --rc geninfo_all_blocks=1 00:03:43.880 --rc geninfo_unexecuted_blocks=1 00:03:43.880 00:03:43.880 ' 00:03:43.880 03:47:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:43.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.880 --rc genhtml_branch_coverage=1 00:03:43.880 --rc genhtml_function_coverage=1 00:03:43.880 --rc genhtml_legend=1 00:03:43.880 --rc geninfo_all_blocks=1 00:03:43.880 --rc geninfo_unexecuted_blocks=1 00:03:43.880 00:03:43.880 ' 00:03:43.880 03:47:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:43.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.880 --rc genhtml_branch_coverage=1 00:03:43.880 --rc genhtml_function_coverage=1 00:03:43.880 --rc genhtml_legend=1 00:03:43.880 --rc geninfo_all_blocks=1 00:03:43.880 --rc geninfo_unexecuted_blocks=1 00:03:43.880 00:03:43.880 ' 00:03:43.880 03:47:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:43.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.880 --rc genhtml_branch_coverage=1 00:03:43.880 --rc genhtml_function_coverage=1 00:03:43.880 --rc genhtml_legend=1 00:03:43.880 --rc geninfo_all_blocks=1 00:03:43.880 --rc geninfo_unexecuted_blocks=1 00:03:43.880 00:03:43.880 ' 00:03:43.880 03:47:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.880 03:47:18 -- nvmf/common.sh@7 -- # uname -s 00:03:43.881 03:47:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.881 03:47:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.881 03:47:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.881 03:47:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.881 03:47:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.881 03:47:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.881 03:47:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.881 03:47:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.881 03:47:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.881 03:47:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.881 03:47:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:03:43.881 
03:47:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:03:43.881 03:47:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.881 03:47:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.881 03:47:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:43.881 03:47:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.881 03:47:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.881 03:47:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.881 03:47:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.881 03:47:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.881 03:47:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.881 03:47:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.881 03:47:18 -- paths/export.sh@5 -- # export PATH 00:03:43.881 03:47:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.881 03:47:18 -- nvmf/common.sh@46 -- # : 0 00:03:43.881 03:47:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:43.881 03:47:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:43.881 03:47:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:43.881 03:47:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.881 03:47:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.881 03:47:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:43.881 03:47:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:43.881 03:47:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:43.881 03:47:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.881 03:47:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.881 03:47:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.881 03:47:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.881 03:47:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.881 03:47:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.881 03:47:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.881 03:47:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.881 03:47:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.881 03:47:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.881 03:47:18 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49796 00:03:43.881 03:47:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.881 03:47:18 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:43.881 03:47:18 -- spdk/autotest.sh@54 -- # echo 49810 00:03:43.881 03:47:18 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:43.881 03:47:18 -- spdk/autotest.sh@56 -- # echo 49813 00:03:43.881 03:47:18 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:43.881 03:47:18 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:43.881 03:47:18 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.881 03:47:18 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:43.881 03:47:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.881 03:47:18 -- common/autotest_common.sh@10 -- # set +x 00:03:43.881 03:47:18 -- spdk/autotest.sh@70 -- # create_test_list 00:03:43.881 03:47:18 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:43.881 03:47:18 -- common/autotest_common.sh@10 -- # set +x 00:03:43.881 03:47:18 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:43.881 03:47:18 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:43.881 03:47:18 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:43.881 03:47:18 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:43.881 03:47:18 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:43.881 03:47:18 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:43.881 03:47:18 -- common/autotest_common.sh@1450 -- # uname 00:03:43.881 03:47:18 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:43.881 03:47:18 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:43.881 03:47:18 -- common/autotest_common.sh@1470 -- # uname 00:03:43.881 03:47:18 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:43.881 03:47:18 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:43.881 03:47:18 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:43.881 lcov: LCOV version 1.15 00:03:43.881 03:47:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:52.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:52.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:52.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:52.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:52.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:52.001 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:10.085 03:47:44 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:10.085 03:47:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.085 03:47:44 -- common/autotest_common.sh@10 -- # set +x 00:04:10.085 03:47:44 -- spdk/autotest.sh@89 -- # rm -f 00:04:10.085 03:47:44 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.343 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:10.343 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:10.343 03:47:45 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:10.343 03:47:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:10.343 03:47:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:10.343 03:47:45 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:10.343 03:47:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.343 03:47:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:10.343 03:47:45 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:10.343 03:47:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.343 03:47:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.344 03:47:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:10.344 03:47:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:10.344 03:47:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.344 03:47:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:10.344 03:47:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:10.344 03:47:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.344 03:47:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:10.344 03:47:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:10.344 03:47:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:10.344 03:47:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.344 03:47:45 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:10.344 03:47:45 -- spdk/autotest.sh@108 -- # grep -v p 00:04:10.344 03:47:45 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:10.344 03:47:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:10.344 03:47:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:10.344 03:47:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:10.344 03:47:45 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:10.344 03:47:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.602 No valid GPT data, bailing 00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
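[editor's note] The get_zoned_devs / is_block_zoned trace above walks /sys/block/nvme* and treats a namespace as zoned whenever its queue/zoned attribute reports anything other than "none"; zoned namespaces must be excluded from the destructive wipe that follows. A compact reconstruction of that filter, assuming bash 4+ and Linux sysfs (the zoned_devs array name follows the trace; the echo at the end is added for illustration):

# Sketch of the zoned-namespace filter traced above.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme ]] || continue            # glob may match nothing
    dev=${nvme##*/}
    # The kernel exposes "none", "host-aware" or "host-managed" here;
    # anything but "none" means the device is zoned.
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1                # keep out of destructive tests
    fi
done
echo "found ${#zoned_devs[@]} zoned namespace(s)"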
00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # pt= 00:04:10.602 03:47:45 -- scripts/common.sh@394 -- # return 1 00:04:10.602 03:47:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.602 1+0 records in 00:04:10.602 1+0 records out 00:04:10.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045061 s, 233 MB/s 00:04:10.602 03:47:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:10.602 03:47:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:10.602 03:47:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:10.602 03:47:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:10.602 03:47:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:10.602 No valid GPT data, bailing 00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # pt= 00:04:10.602 03:47:45 -- scripts/common.sh@394 -- # return 1 00:04:10.602 03:47:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:10.602 1+0 records in 00:04:10.602 1+0 records out 00:04:10.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467004 s, 225 MB/s 00:04:10.602 03:47:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:10.602 03:47:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:10.602 03:47:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:10.602 03:47:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:10.602 03:47:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:10.602 No valid GPT data, bailing 00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:10.602 03:47:45 -- scripts/common.sh@393 -- # pt= 00:04:10.602 03:47:45 -- scripts/common.sh@394 -- # return 1 00:04:10.602 03:47:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:10.602 1+0 records in 00:04:10.602 1+0 records out 00:04:10.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438446 s, 239 MB/s 00:04:10.602 03:47:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:10.602 03:47:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:10.602 03:47:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:10.602 03:47:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:10.602 03:47:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:10.861 No valid GPT data, bailing 00:04:10.861 03:47:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:10.861 03:47:45 -- scripts/common.sh@393 -- # pt= 00:04:10.861 03:47:45 -- scripts/common.sh@394 -- # return 1 00:04:10.861 03:47:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:10.861 1+0 records in 00:04:10.861 1+0 records out 00:04:10.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397678 s, 264 MB/s 00:04:10.861 03:47:45 -- spdk/autotest.sh@116 -- # sync 00:04:10.861 03:47:45 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.861 03:47:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.861 03:47:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.761 03:47:47 -- spdk/autotest.sh@122 -- # uname -s 00:04:12.761 03:47:47 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
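[editor's note] The pre-cleanup sequence above follows a wipe-if-unpartitioned pattern for each NVMe namespace: spdk-gpt.py and blkid -s PTTYPE probe for a partition table ("No valid GPT data, bailing" means none was found), and only then does dd zero the first MiB so stale metadata cannot confuse later tests. A hedged, destructive-by-design sketch of that loop (run only against disposable test devices; the grep -v p partition filter from the log is expressed as a glob test here):

# Sketch of the wipe step shown in the log above. DESTRUCTIVE.
for dev in /dev/nvme*n*; do
    [[ -e $dev ]] || continue
    [[ $dev == *p* ]] && continue        # skip partitions like nvme0n1p1
    # blkid exits nonzero and prints nothing when no table is found.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB
    else
        echo "$dev has a $pt partition table, leaving it alone"
    fi
done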
00:04:12.761 03:47:47 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:12.761 03:47:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.761 03:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.761 03:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:12.761 ************************************ 00:04:12.761 START TEST setup.sh 00:04:12.761 ************************************ 00:04:12.761 03:47:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:13.020 * Looking for test storage... 00:04:13.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.020 03:47:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.020 03:47:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.020 03:47:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.020 03:47:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.020 03:47:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.020 03:47:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.020 03:47:47 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.020 03:47:47 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.020 03:47:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.020 03:47:47 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.020 03:47:47 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.020 03:47:47 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.020 03:47:47 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.020 03:47:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.020 03:47:47 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.020 03:47:47 -- scripts/common.sh@344 -- # : 1 00:04:13.020 03:47:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.020 03:47:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.020 03:47:47 -- scripts/common.sh@364 -- # decimal 1 00:04:13.020 03:47:47 -- scripts/common.sh@352 -- # local d=1 00:04:13.020 03:47:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.020 03:47:47 -- scripts/common.sh@354 -- # echo 1 00:04:13.020 03:47:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.020 03:47:47 -- scripts/common.sh@365 -- # decimal 2 00:04:13.020 03:47:47 -- scripts/common.sh@352 -- # local d=2 00:04:13.020 03:47:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.020 03:47:47 -- scripts/common.sh@354 -- # echo 2 00:04:13.020 03:47:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.020 03:47:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.020 03:47:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.020 03:47:47 -- scripts/common.sh@367 -- # return 0 00:04:13.020 03:47:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.020 --rc genhtml_branch_coverage=1 00:04:13.020 --rc genhtml_function_coverage=1 00:04:13.020 --rc genhtml_legend=1 00:04:13.020 --rc geninfo_all_blocks=1 00:04:13.020 --rc geninfo_unexecuted_blocks=1 00:04:13.020 00:04:13.020 ' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.020 --rc genhtml_branch_coverage=1 00:04:13.020 --rc genhtml_function_coverage=1 00:04:13.020 --rc genhtml_legend=1 00:04:13.020 --rc geninfo_all_blocks=1 00:04:13.020 --rc geninfo_unexecuted_blocks=1 00:04:13.020 00:04:13.020 ' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.020 --rc genhtml_branch_coverage=1 00:04:13.020 --rc genhtml_function_coverage=1 00:04:13.020 --rc genhtml_legend=1 00:04:13.020 --rc geninfo_all_blocks=1 00:04:13.020 --rc geninfo_unexecuted_blocks=1 00:04:13.020 00:04:13.020 ' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.020 --rc genhtml_branch_coverage=1 00:04:13.020 --rc genhtml_function_coverage=1 00:04:13.020 --rc genhtml_legend=1 00:04:13.020 --rc geninfo_all_blocks=1 00:04:13.020 --rc geninfo_unexecuted_blocks=1 00:04:13.020 00:04:13.020 ' 00:04:13.020 03:47:47 -- setup/test-setup.sh@10 -- # uname -s 00:04:13.020 03:47:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:13.020 03:47:47 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:13.020 03:47:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.020 03:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.020 03:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:13.020 ************************************ 00:04:13.020 START TEST acl 00:04:13.020 ************************************ 00:04:13.020 03:47:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:13.020 * Looking for test storage... 
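[editor's note] The scripts/common.sh trace that keeps recurring in this log (cmp_versions, entered via "lt 1.15 2") implements dotted-version comparison: split both strings on ".", "-" and ":", then compare numerically field by field, with the first unequal field deciding. A simplified reconstruction under those assumptions; the function name matches the trace, but the body is a sketch, not the verbatim SPDK script (which also canonicalizes fields through its decimal helper):

# Sketch of the version comparison traced above: "is $1 < $2".
lt() {
    local -a ver1 ver2
    local v max a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    (( ${#ver1[@]} > ${#ver2[@]} )) && max=${#ver1[@]} || max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0           # first differing field: lower
        (( a > b )) && return 1
    done
    return 1                              # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"              # prints: 1.15 < 2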
00:04:13.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.021 03:47:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.021 03:47:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.021 03:47:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.279 03:47:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.279 03:47:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.279 03:47:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.279 03:47:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.279 03:47:48 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.279 03:47:48 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.279 03:47:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.279 03:47:48 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.279 03:47:48 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.279 03:47:48 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.279 03:47:48 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.279 03:47:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.279 03:47:48 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.279 03:47:48 -- scripts/common.sh@344 -- # : 1 00:04:13.279 03:47:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.279 03:47:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.279 03:47:48 -- scripts/common.sh@364 -- # decimal 1 00:04:13.279 03:47:48 -- scripts/common.sh@352 -- # local d=1 00:04:13.279 03:47:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.279 03:47:48 -- scripts/common.sh@354 -- # echo 1 00:04:13.279 03:47:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.279 03:47:48 -- scripts/common.sh@365 -- # decimal 2 00:04:13.279 03:47:48 -- scripts/common.sh@352 -- # local d=2 00:04:13.279 03:47:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.279 03:47:48 -- scripts/common.sh@354 -- # echo 2 00:04:13.279 03:47:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.279 03:47:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.279 03:47:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.279 03:47:48 -- scripts/common.sh@367 -- # return 0 00:04:13.279 03:47:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.279 03:47:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.279 --rc genhtml_branch_coverage=1 00:04:13.279 --rc genhtml_function_coverage=1 00:04:13.279 --rc genhtml_legend=1 00:04:13.279 --rc geninfo_all_blocks=1 00:04:13.279 --rc geninfo_unexecuted_blocks=1 00:04:13.279 00:04:13.279 ' 00:04:13.279 03:47:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.279 --rc genhtml_branch_coverage=1 00:04:13.279 --rc genhtml_function_coverage=1 00:04:13.279 --rc genhtml_legend=1 00:04:13.279 --rc geninfo_all_blocks=1 00:04:13.279 --rc geninfo_unexecuted_blocks=1 00:04:13.279 00:04:13.279 ' 00:04:13.279 03:47:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.279 --rc genhtml_branch_coverage=1 00:04:13.279 --rc genhtml_function_coverage=1 00:04:13.279 --rc genhtml_legend=1 00:04:13.279 --rc geninfo_all_blocks=1 00:04:13.279 --rc geninfo_unexecuted_blocks=1 00:04:13.279 00:04:13.279 ' 00:04:13.279 03:47:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.279 --rc genhtml_branch_coverage=1 00:04:13.279 --rc genhtml_function_coverage=1 00:04:13.279 --rc genhtml_legend=1 00:04:13.279 --rc geninfo_all_blocks=1 00:04:13.279 --rc geninfo_unexecuted_blocks=1 00:04:13.279 00:04:13.279 ' 00:04:13.279 03:47:48 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:13.279 03:47:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:13.279 03:47:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:13.279 03:47:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:13.279 03:47:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:13.279 03:47:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:13.279 03:47:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:13.279 03:47:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:13.279 03:47:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:13.279 03:47:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:13.279 03:47:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:13.279 03:47:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:13.279 03:47:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:13.279 03:47:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:13.279 03:47:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:13.279 03:47:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:13.279 03:47:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:13.279 03:47:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:13.280 03:47:48 -- setup/acl.sh@12 -- # devs=() 00:04:13.280 03:47:48 -- setup/acl.sh@12 -- # declare -a devs 00:04:13.280 03:47:48 -- setup/acl.sh@13 -- # drivers=() 00:04:13.280 03:47:48 -- setup/acl.sh@13 -- # declare -A drivers 00:04:13.280 03:47:48 -- setup/acl.sh@51 -- # setup reset 00:04:13.280 03:47:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.280 03:47:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.847 03:47:48 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:13.847 03:47:48 -- setup/acl.sh@16 -- # local dev driver 00:04:13.847 03:47:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.847 03:47:48 -- setup/acl.sh@15 -- # setup output status 00:04:13.847 03:47:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.847 03:47:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.106 Hugepages 00:04:14.106 node hugesize free / total 00:04:14.106 03:47:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:14.106 03:47:49 -- setup/acl.sh@19 -- # continue 00:04:14.106 03:47:49 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:14.106 00:04:14.106 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.106 03:47:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:14.106 03:47:49 -- setup/acl.sh@19 -- # continue 00:04:14.106 03:47:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.106 03:47:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:14.106 03:47:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:14.106 03:47:49 -- setup/acl.sh@20 -- # continue 00:04:14.106 03:47:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.365 03:47:49 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:14.365 03:47:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:14.365 03:47:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:14.365 03:47:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:14.365 03:47:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:14.365 03:47:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.365 03:47:49 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:14.365 03:47:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:14.365 03:47:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:14.365 03:47:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:14.365 03:47:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:14.365 03:47:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.365 03:47:49 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:14.365 03:47:49 -- setup/acl.sh@54 -- # run_test denied denied 00:04:14.366 03:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.366 03:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.366 03:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:14.366 ************************************ 00:04:14.366 START TEST denied 00:04:14.366 ************************************ 00:04:14.366 03:47:49 -- common/autotest_common.sh@1114 -- # denied 00:04:14.366 03:47:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:14.366 03:47:49 -- setup/acl.sh@38 -- # setup output config 00:04:14.366 03:47:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:14.366 03:47:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.366 03:47:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.300 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:15.300 03:47:50 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:15.300 03:47:50 -- setup/acl.sh@28 -- # local dev driver 00:04:15.300 03:47:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:15.300 03:47:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:15.300 03:47:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:15.300 03:47:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:15.300 03:47:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:15.300 03:47:50 -- setup/acl.sh@41 -- # setup reset 00:04:15.300 03:47:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.300 03:47:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.869 00:04:15.869 real 0m1.451s 00:04:15.869 user 0m0.582s 00:04:15.869 sys 0m0.827s 00:04:15.869 03:47:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.869 ************************************ 00:04:15.869 03:47:50 -- common/autotest_common.sh@10 -- # set +x 00:04:15.869 END TEST denied 00:04:15.869 
************************************ 00:04:15.869 03:47:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:15.869 03:47:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.869 03:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.869 03:47:50 -- common/autotest_common.sh@10 -- # set +x 00:04:15.869 ************************************ 00:04:15.869 START TEST allowed 00:04:15.869 ************************************ 00:04:15.869 03:47:50 -- common/autotest_common.sh@1114 -- # allowed 00:04:15.869 03:47:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:15.869 03:47:50 -- setup/acl.sh@45 -- # setup output config 00:04:15.869 03:47:50 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:15.869 03:47:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.869 03:47:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.805 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.805 03:47:51 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:16.805 03:47:51 -- setup/acl.sh@28 -- # local dev driver 00:04:16.805 03:47:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:16.805 03:47:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:16.805 03:47:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:16.805 03:47:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:16.805 03:47:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:16.805 03:47:51 -- setup/acl.sh@48 -- # setup reset 00:04:16.805 03:47:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.805 03:47:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.388 00:04:17.388 real 0m1.542s 00:04:17.388 user 0m0.694s 00:04:17.388 sys 0m0.852s 00:04:17.388 03:47:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.388 03:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:17.388 ************************************ 00:04:17.388 END TEST allowed 00:04:17.388 ************************************ 00:04:17.388 00:04:17.388 real 0m4.436s 00:04:17.388 user 0m1.927s 00:04:17.388 sys 0m2.490s 00:04:17.388 03:47:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.388 03:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:17.388 ************************************ 00:04:17.388 END TEST acl 00:04:17.388 ************************************ 00:04:17.388 03:47:52 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:17.388 03:47:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.388 03:47:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.388 03:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:17.388 ************************************ 00:04:17.388 START TEST hugepages 00:04:17.388 ************************************ 00:04:17.388 03:47:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:17.648 * Looking for test storage... 
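[editor's note] The hugepages.sh run that starts here derives the default hugepage size by walking /proc/meminfo field by field, which is what produces the very long Hugepagesize-matching trace further down. The pure-bash loop exists so xtrace records every comparison; functionally it reduces to a one-key lookup. A sketch under that reading (get_meminfo matches the traced name; the body is simplified and the awk line is an added equivalent, not from the script):

# Sketch of the get_meminfo pattern traced below: return the value
# of one /proc/meminfo key, e.g. Hugepagesize in kB.
get_meminfo() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)   # typically 2048 (kB)
echo "default hugepage size: ${default_hugepages} kB"
# equivalent shortcut: awk '/^Hugepagesize:/ {print $2}' /proc/meminfo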
00:04:17.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.648 03:47:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:17.648 03:47:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:17.648 03:47:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:17.648 03:47:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:17.648 03:47:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:17.648 03:47:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:17.648 03:47:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:17.648 03:47:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:17.648 03:47:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:17.648 03:47:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.648 03:47:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:17.648 03:47:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:17.648 03:47:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:17.648 03:47:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:17.648 03:47:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:17.648 03:47:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:17.648 03:47:52 -- scripts/common.sh@344 -- # : 1 00:04:17.648 03:47:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:17.648 03:47:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.648 03:47:52 -- scripts/common.sh@364 -- # decimal 1 00:04:17.648 03:47:52 -- scripts/common.sh@352 -- # local d=1 00:04:17.648 03:47:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.648 03:47:52 -- scripts/common.sh@354 -- # echo 1 00:04:17.648 03:47:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:17.648 03:47:52 -- scripts/common.sh@365 -- # decimal 2 00:04:17.648 03:47:52 -- scripts/common.sh@352 -- # local d=2 00:04:17.648 03:47:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.648 03:47:52 -- scripts/common.sh@354 -- # echo 2 00:04:17.648 03:47:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:17.648 03:47:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:17.648 03:47:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:17.648 03:47:52 -- scripts/common.sh@367 -- # return 0 00:04:17.648 03:47:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.649 03:47:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:17.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.649 --rc genhtml_branch_coverage=1 00:04:17.649 --rc genhtml_function_coverage=1 00:04:17.649 --rc genhtml_legend=1 00:04:17.649 --rc geninfo_all_blocks=1 00:04:17.649 --rc geninfo_unexecuted_blocks=1 00:04:17.649 00:04:17.649 ' 00:04:17.649 03:47:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:17.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.649 --rc genhtml_branch_coverage=1 00:04:17.649 --rc genhtml_function_coverage=1 00:04:17.649 --rc genhtml_legend=1 00:04:17.649 --rc geninfo_all_blocks=1 00:04:17.649 --rc geninfo_unexecuted_blocks=1 00:04:17.649 00:04:17.649 ' 00:04:17.649 03:47:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:17.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.649 --rc genhtml_branch_coverage=1 00:04:17.649 --rc genhtml_function_coverage=1 00:04:17.649 --rc genhtml_legend=1 00:04:17.649 --rc geninfo_all_blocks=1 00:04:17.649 --rc geninfo_unexecuted_blocks=1 00:04:17.649 00:04:17.649 ' 00:04:17.649 03:47:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:17.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.649 --rc genhtml_branch_coverage=1 00:04:17.649 --rc genhtml_function_coverage=1 00:04:17.649 --rc genhtml_legend=1 00:04:17.649 --rc geninfo_all_blocks=1 00:04:17.649 --rc geninfo_unexecuted_blocks=1 00:04:17.649 00:04:17.649 ' 00:04:17.649 03:47:52 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:17.649 03:47:52 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:17.649 03:47:52 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:17.649 03:47:52 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:17.649 03:47:52 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:17.649 03:47:52 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:17.649 03:47:52 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:17.649 03:47:52 -- setup/common.sh@18 -- # local node= 00:04:17.649 03:47:52 -- setup/common.sh@19 -- # local var val 00:04:17.649 03:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.649 03:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.649 03:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.649 03:47:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.649 03:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.649 03:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 5838596 kB' 'MemAvailable: 7350156 kB' 'Buffers: 3448 kB' 'Cached: 1721480 kB' 'SwapCached: 0 kB' 'Active: 496644 kB' 'Inactive: 1345260 kB' 'Active(anon): 127488 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 118880 kB' 'Mapped: 51120 kB' 'Shmem: 10508 kB' 'KReclaimable: 68184 kB' 'Slab: 164136 kB' 'SReclaimable: 68184 kB' 'SUnreclaim: 95952 kB' 'KernelStack: 6576 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 321232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- 
setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.649 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.649 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # continue 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.650 03:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.650 03:47:52 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.650 03:47:52 -- setup/common.sh@33 -- # echo 2048 00:04:17.650 03:47:52 -- setup/common.sh@33 -- # return 0 00:04:17.650 03:47:52 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:17.650 03:47:52 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:17.650 03:47:52 -- setup/hugepages.sh@18 -- 
00:04:17.650 03:47:52 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:17.650 03:47:52 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:17.650 03:47:52 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:17.650 03:47:52 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:17.650 03:47:52 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:17.650 03:47:52 -- setup/hugepages.sh@207 -- # get_nodes
00:04:17.650 03:47:52 -- setup/hugepages.sh@27 -- # local node
00:04:17.650 03:47:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.650 03:47:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:17.650 03:47:52 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:17.650 03:47:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:17.650 03:47:52 -- setup/hugepages.sh@208 -- # clear_hp
00:04:17.650 03:47:52 -- setup/hugepages.sh@37 -- # local node hp
00:04:17.650 03:47:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:17.650 03:47:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.650 03:47:52 -- setup/hugepages.sh@41 -- # echo 0
00:04:17.650 03:47:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.650 03:47:52 -- setup/hugepages.sh@41 -- # echo 0
00:04:17.650 03:47:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:17.650 03:47:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:17.650 03:47:52 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:17.650 03:47:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:17.650 03:47:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:17.650 03:47:52 -- common/autotest_common.sh@10 -- # set +x
00:04:17.650 ************************************
00:04:17.650 START TEST default_setup
00:04:17.650 ************************************
00:04:17.650 03:47:52 -- common/autotest_common.sh@1114 -- # default_setup
00:04:17.650 03:47:52 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:17.650 03:47:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:17.650 03:47:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:17.650 03:47:52 -- setup/hugepages.sh@51 -- # shift
00:04:17.650 03:47:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:17.650 03:47:52 -- setup/hugepages.sh@52 -- # local node_ids
00:04:17.650 03:47:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.650 03:47:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:17.650 03:47:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:17.650 03:47:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:17.650 03:47:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.650 03:47:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.650 03:47:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:17.650 03:47:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.650 03:47:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.650 03:47:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:17.650 03:47:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:17.651 03:47:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:17.651 03:47:52 -- setup/hugepages.sh@73 -- # return 0
00:04:17.651 03:47:52 -- setup/hugepages.sh@137 -- # setup output
00:04:17.651 03:47:52 -- setup/common.sh@9 -- # [[ output == output ]]
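Two details in the trace above are worth unpacking. The jump from size=2097152 to nr_hugepages=1024 is integer division by the 2048 kB default page size detected earlier: 2097152 / 2048 = 1024, which also matches the 'Hugetlb: 2097152 kB' totals reported later. And clear_hp zeroes every per-node hugepage pool before the test begins; roughly, as a sketch of the traced loop rather than the script's verbatim source (writing these files requires root):

    # Zero every hugepage pool under every NUMA node before the test run:
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done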
00:04:17.651 03:47:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:18.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.589 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.589 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.589 03:47:53 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:18.589 03:47:53 -- setup/hugepages.sh@89 -- # local node
00:04:18.589 03:47:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:18.589 03:47:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:18.589 03:47:53 -- setup/hugepages.sh@92 -- # local surp
00:04:18.589 03:47:53 -- setup/hugepages.sh@93 -- # local resv
00:04:18.589 03:47:53 -- setup/hugepages.sh@94 -- # local anon
00:04:18.589 03:47:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:18.589 03:47:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:18.589 03:47:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:18.589 03:47:53 -- setup/common.sh@18 -- # local node=
00:04:18.589 03:47:53 -- setup/common.sh@19 -- # local var val
00:04:18.589 03:47:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:18.589 03:47:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.589 03:47:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.589 03:47:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.589 03:47:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.589 03:47:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.589 03:47:53 -- setup/common.sh@31 -- # IFS=': '
00:04:18.589 03:47:53 -- setup/common.sh@31 -- # read -r var val _
00:04:18.590 03:47:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7953564 kB' 'MemAvailable: 9465112 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498432 kB' 'Inactive: 1345272 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 120420 kB' 'Mapped: 50848 kB' 'Shmem: 10488 kB' 'KReclaimable: 68132 kB' 'Slab: 164076 kB' 'SReclaimable: 68132 kB' 'SUnreclaim: 95944 kB' 'KernelStack: 6576 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:18.590 03:47:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.590 03:47:53 -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace repeats for every key from MemFree through HardwareCorrupted ...]
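What the wall of [[ ... ]] / continue lines above amounts to: get_meminfo walks /proc/meminfo one "Key: value" pair at a time and prints the value of the first key that matches its argument. A condensed reimplementation of that logic (the real script feeds the file through mapfile first; this is a sketch, not its exact source):

    get_meminfo() {                            # usage: get_meminfo AnonHugePages
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val" && return 0
        done < /proc/meminfo
        return 1                               # key not present
    }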
00:04:18.591 03:47:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.591 03:47:53 -- setup/common.sh@33 -- # echo 0
00:04:18.591 03:47:53 -- setup/common.sh@33 -- # return 0
00:04:18.591 03:47:53 -- setup/hugepages.sh@97 -- # anon=0
00:04:18.591 03:47:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:18.591 03:47:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.591 03:47:53 -- setup/common.sh@18 -- # local node=
00:04:18.591 03:47:53 -- setup/common.sh@19 -- # local var val
00:04:18.591 03:47:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:18.591 03:47:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.591 03:47:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.591 03:47:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.591 03:47:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.591 03:47:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.591 03:47:53 -- setup/common.sh@31 -- # IFS=': '
00:04:18.591 03:47:53 -- setup/common.sh@31 -- # read -r var val _
00:04:18.591 03:47:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7953316 kB' 'MemAvailable: 9464740 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497884 kB' 'Inactive: 1345276 kB' 'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50848 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163848 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95972 kB' 'KernelStack: 6528 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:18.591 03:47:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.591 03:47:53 -- setup/common.sh@32 -- # continue
[... the scan xtrace repeats key by key until HugePages_Surp is reached ...]
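Since verify_nr_hugepages calls get_meminfo once per key, the trace shows four full scans of the same file. An obvious one-pass variant, shown purely as an aside and not something the script does, is to load the whole file into an associative array and index it:

    declare -A meminfo
    while IFS=': ' read -r key val _; do
        meminfo[$key]=$val
    done < /proc/meminfo
    echo "${meminfo[HugePages_Surp]}"   # -> 0 on this run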
00:04:18.592 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.592 03:47:53 -- setup/common.sh@33 -- # echo 0
00:04:18.592 03:47:53 -- setup/common.sh@33 -- # return 0
00:04:18.592 03:47:53 -- setup/hugepages.sh@99 -- # surp=0
00:04:18.592 03:47:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:18.592 03:47:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.592 03:47:53 -- setup/common.sh@18 -- # local node=
00:04:18.592 03:47:53 -- setup/common.sh@19 -- # local var val
00:04:18.592 03:47:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:18.592 03:47:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.592 03:47:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.592 03:47:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.592 03:47:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.592 03:47:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.592 03:47:53 -- setup/common.sh@31 -- # IFS=': '
00:04:18.592 03:47:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7953064 kB' 'MemAvailable: 9464488 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497932 kB' 'Inactive: 1345276 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 50848 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163836 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95960 kB' 'KernelStack: 6512 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:18.592 03:47:53 -- setup/common.sh@31 -- # read -r var val _
00:04:18.592 03:47:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.592 03:47:53 -- setup/common.sh@32 -- # continue
[... the scan xtrace repeats key by key until HugePages_Rsvd is reached ...]
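The mem=("${mem[@]#Node +([0-9]) }") expansion that recurs in each call above exists for the per-node case: /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", and the extglob pattern strips that prefix so the same parser handles both that file and /proc/meminfo. A small standalone demonstration (requires shopt -s extglob):

    shopt -s extglob
    line='Node 0 MemTotal: 12239108 kB'
    echo "${line#Node +([0-9]) }"   # -> MemTotal: 12239108 kB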
00:04:18.854 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.854 03:47:53 -- setup/common.sh@33 -- # echo 0
00:04:18.854 03:47:53 -- setup/common.sh@33 -- # return 0
00:04:18.854 nr_hugepages=1024
00:04:18.854 03:47:53 -- setup/hugepages.sh@100 -- # resv=0
00:04:18.854 03:47:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:18.854 resv_hugepages=0
00:04:18.854 03:47:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.854 surplus_hugepages=0
00:04:18.854 anon_hugepages=0
00:04:18.854 03:47:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.854 03:47:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.854 03:47:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.854 03:47:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:18.854 03:47:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.854 03:47:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.854 03:47:53 -- setup/common.sh@18 -- # local node=
00:04:18.854 03:47:53 -- setup/common.sh@19 -- # local var val
00:04:18.854 03:47:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:18.854 03:47:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.854 03:47:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.854 03:47:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.854 03:47:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.854 03:47:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.854 03:47:53 -- setup/common.sh@31 -- # IFS=': '
00:04:18.854 03:47:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7953316 kB' 'MemAvailable: 9464740 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497816 kB' 'Inactive: 1345276 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163828 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95952 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:18.854 03:47:53 -- setup/common.sh@31 -- # read -r var val _
00:04:18.854 03:47:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.854 03:47:53 -- setup/common.sh@32 -- # continue
[... the scan xtrace repeats for every key from MemFree through CmaFree ...]
00:04:18.855 03:47:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.855 03:47:53 --
setup/common.sh@32 -- # continue 00:04:18.855 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.855 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.855 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.855 03:47:53 -- setup/common.sh@33 -- # echo 1024 00:04:18.855 03:47:53 -- setup/common.sh@33 -- # return 0 00:04:18.855 03:47:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.855 03:47:53 -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.855 03:47:53 -- setup/hugepages.sh@27 -- # local node 00:04:18.855 03:47:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.855 03:47:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:18.855 03:47:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.855 03:47:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.855 03:47:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.856 03:47:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.856 03:47:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.856 03:47:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.856 03:47:53 -- setup/common.sh@18 -- # local node=0 00:04:18.856 03:47:53 -- setup/common.sh@19 -- # local var val 00:04:18.856 03:47:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.856 03:47:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.856 03:47:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.856 03:47:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.856 03:47:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.856 03:47:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7955892 kB' 'MemUsed: 4283216 kB' 'SwapCached: 0 kB' 'Active: 497752 kB' 'Inactive: 1345276 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1724916 kB' 'Mapped: 50720 kB' 'AnonPages: 119740 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163828 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 
03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.856 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.856 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.857 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.857 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.857 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.857 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.857 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.857 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.857 03:47:53 -- setup/common.sh@32 -- # continue 00:04:18.857 03:47:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.857 03:47:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.857 03:47:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.857 03:47:53 -- setup/common.sh@33 -- # echo 0 00:04:18.857 03:47:53 -- setup/common.sh@33 -- # return 0 00:04:18.857 03:47:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.857 03:47:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.857 03:47:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.857 node0=1024 expecting 1024 00:04:18.857 ************************************ 00:04:18.857 END TEST default_setup 00:04:18.857 ************************************ 00:04:18.857 03:47:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.857 03:47:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:18.857 03:47:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:18.857 00:04:18.857 real 0m1.034s 00:04:18.857 user 0m0.461s 00:04:18.857 sys 0m0.495s 00:04:18.857 03:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.857 03:47:53 -- common/autotest_common.sh@10 -- # set +x 00:04:18.857 03:47:53 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:18.857 03:47:53 
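Every field-by-field scan condensed above comes from the same lookup helper in setup/common.sh. A minimal sketch of the pattern, reconstructed from the trace (simplified here, not the verbatim SPDK helper):

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, read that node's sysfs copy instead (common.sh@23-24 above).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every field with "Node N "
        while IFS=': ' read -r var val _; do
            # Each non-matching field is one "continue" in the xtrace.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # get_meminfo HugePages_Total    -> 1024  (global /proc/meminfo)
    # get_meminfo HugePages_Surp 0   -> 0     (node0 sysfs meminfo)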
00:04:18.857 03:47:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:18.857 03:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:18.857 03:47:53 -- common/autotest_common.sh@10 -- # set +x
00:04:18.857 ************************************
00:04:18.857 START TEST per_node_1G_alloc
00:04:18.857 ************************************
00:04:18.857 03:47:53 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:18.857 03:47:53 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:18.857 03:47:53 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:18.857 03:47:53 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:18.857 03:47:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:18.857 03:47:53 -- setup/hugepages.sh@51 -- # shift
00:04:18.857 03:47:53 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:18.857 03:47:53 -- setup/hugepages.sh@52 -- # local node_ids
00:04:18.857 03:47:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:18.857 03:47:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:18.857 03:47:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:18.857 03:47:53 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:18.857 03:47:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:18.857 03:47:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:18.857 03:47:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:18.857 03:47:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:18.857 03:47:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:18.857 03:47:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:18.857 03:47:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:18.857 03:47:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:18.857 03:47:53 -- setup/hugepages.sh@73 -- # return 0
00:04:18.857 03:47:53 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:18.857 03:47:53 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:18.857 03:47:53 -- setup/hugepages.sh@146 -- # setup output
00:04:18.857 03:47:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.857 03:47:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:19.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:19.115 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:19.115 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:19.115 03:47:54 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:19.115 03:47:54 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:19.115 03:47:54 -- setup/hugepages.sh@89 -- # local node
00:04:19.115 03:47:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.115 03:47:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.115 03:47:54 -- setup/hugepages.sh@92 -- # local surp
00:04:19.115 03:47:54 -- setup/hugepages.sh@93 -- # local resv
00:04:19.115 03:47:54 -- setup/hugepages.sh@94 -- # local anon
00:04:19.115 03:47:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:19.115 03:47:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:19.115 03:47:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:19.115 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.115 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.115 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.115 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
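In this trace NRHUGE=512 and HUGENODE=0 ask scripts/setup.sh to reserve 512 default-size (2048 kB) hugepages on NUMA node 0, i.e. the 1 GiB the test name refers to. Outside the harness the same per-node reservation can be made through the sysfs knob directly (illustrative commands, not lifted from setup.sh):

    # Reserve 512 x 2 MiB pages on node 0 only
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # Confirm the pool landed on that node
    grep HugePages_Total /sys/devices/system/node/node0/meminfo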
00:04:19.115 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.115 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.115 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.115 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.378 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.378 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9001644 kB' 'MemAvailable: 10513068 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 498056 kB' 'Inactive: 1345276 kB' 'Active(anon): 128900 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50828 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163844 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95968 kB' 'KernelStack: 6552 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.378 03:47:54 -- setup/common.sh@31-32 -- # [trace condensed: scan of the snapshot above, "continue" on each field until AnonHugePages matches]
00:04:19.379 03:47:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.379 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.379 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.379 03:47:54 -- setup/hugepages.sh@97 -- # anon=0
00:04:19.379 03:47:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.379 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.379 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.379 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.379 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.379 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.379 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.379 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.379 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.379 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.379 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.379 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9001644 kB' 'MemAvailable: 10513068 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497832 kB' 'Inactive: 1345276 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119780 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163836 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95960 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.379 03:47:54 -- setup/common.sh@31-32 -- # [trace condensed: scan of the snapshot above, "continue" on each field until HugePages_Surp matches]
00:04:19.380 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.380 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.380 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.380 03:47:54 -- setup/hugepages.sh@99 -- # surp=0
00:04:19.380 03:47:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.380 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.380 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.380 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.380 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.380 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.380 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.380 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.380 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.380 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.380 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.381 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9000888 kB' 'MemAvailable: 10512312 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 1345276 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119492 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163816 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95940 kB' 'KernelStack: 6544 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.381 03:47:54 -- setup/common.sh@31-32 -- # [trace condensed: field-by-field scan of the snapshot above for HugePages_Rsvd in progress]
00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 
03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.381 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.381 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.382 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.382 03:47:54 -- setup/common.sh@32 -- 
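What the trace above is doing, in ordinary bash: setup/common.sh's get_meminfo splits each meminfo line on ': ' into a field name and a value, continues past every field that is not the requested one, and echoes the value on the first match. The odd \H\u\g\e... rendering is just how xtrace prints the quoted right-hand side of [[ var == pattern ]], since an unquoted pattern would be treated as a glob. A minimal standalone sketch of that pattern, not the verbatim SPDK helper:

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo loop traced above (illustrative, not
# the verbatim SPDK helper): split each /proc/meminfo line on ': ', skip
# every field that is not the requested one, print the first match.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Rsvd
        echo "$val"                        # the 'kB' unit lands in $_
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Rsvd   # printed 0 in the run above
```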
00:04:19.382 03:47:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:19.382 nr_hugepages=512
00:04:19.382 03:47:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.382 resv_hugepages=0
00:04:19.382 03:47:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.382 surplus_hugepages=0
00:04:19.382 03:47:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.382 anon_hugepages=0
00:04:19.382 03:47:54 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:19.382 03:47:54 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:19.382 03:47:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.382 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.382 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.382 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.382 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.382 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.382 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.382 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.382 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.382 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.382 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.382 03:47:54 -- setup/common.sh@31 -- # read -r var val _
00:04:19.382 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9000888 kB' 'MemAvailable: 10512312 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497816 kB' 'Inactive: 1345276 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119812 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163808 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95932 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.383 03:47:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.383 03:47:54 -- setup/common.sh@32 -- # continue
[... compare/continue trace elided: every field from MemFree through Unaccepted is skipped ...]
00:04:19.384 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.384 03:47:54 -- setup/common.sh@33 -- # echo 512
00:04:19.384 03:47:54 -- setup/common.sh@33 -- # return 0
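The three (( ... )) checks around this point are the heart of the verification: the 512 pages the test requested (512 x 2048 kB = 1 GiB) must equal HugePages_Total once surplus and reserved pages are folded in, and both are 0 in this run. A hedged sketch of that accounting, with illustrative names rather than the SPDK code:

```bash
#!/usr/bin/env bash
# Sketch of the accounting checked by hugepages.sh@107/@109/@110 above:
# requested page count == HugePages_Total, with surplus and reserved
# pages folded in. Names are illustrative, not the SPDK code.
meminfo() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

requested=512                           # 512 x 2048 kB pages = 1 GiB
nr_hugepages=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

(( requested == nr_hugepages + surp + resv )) \
    || echo "hugepage accounting is off" >&2
```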
00:04:19.384 03:47:54 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:19.384 03:47:54 -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.384 03:47:54 -- setup/hugepages.sh@27 -- # local node
00:04:19.384 03:47:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.384 03:47:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:19.384 03:47:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:19.384 03:47:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.384 03:47:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.384 03:47:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.384 03:47:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.384 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.384 03:47:54 -- setup/common.sh@18 -- # local node=0
00:04:19.384 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.384 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.384 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.384 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.384 03:47:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.384 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.384 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.384 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.384 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9000888 kB' 'MemUsed: 3238220 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 1345276 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1724916 kB' 'Mapped: 50720 kB' 'AnonPages: 119784 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163804 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:19.384 03:47:54 -- setup/common.sh@31 -- # read -r var val _
00:04:19.384 03:47:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.384 03:47:54 -- setup/common.sh@32 -- # continue
[... compare/continue trace elided: every node0 field from MemFree through HugePages_Free is skipped ...]
00:04:19.385 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.385 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.385 03:47:54 -- setup/common.sh@33 -- # return 0
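get_meminfo has a second mode, exercised here for the per-node check: when a node number is passed (get_meminfo HugePages_Surp 0), it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips the "Node 0 " prefix those lines carry. A standalone sketch of that branch, assuming the prefix strip is done per line rather than over the whole array:

```bash
#!/usr/bin/env bash
# Sketch of the per-node branch traced above (illustrative, not the SPDK
# code): lines in /sys/devices/system/node/node<N>/meminfo look like
# "Node 0 HugePages_Surp: 0", so the "Node <N> " prefix is dropped
# before the usual name/value split.
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    while read -r line; do
        line=${line#"Node $node "}               # strip the per-node prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # node0 reported 0 above
```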
00:04:19.385 03:47:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.385 03:47:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.385 03:47:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.385 node0=512 expecting 512
00:04:19.385 03:47:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:19.385 03:47:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:19.385 ************************************
00:04:19.385 END TEST per_node_1G_alloc
00:04:19.385 ************************************
00:04:19.385 real 0m0.587s
00:04:19.385 user 0m0.279s
00:04:19.385 sys 0m0.304s
00:04:19.385 03:47:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:19.385 03:47:54 -- common/autotest_common.sh@10 -- # set +x
00:04:19.385 03:47:54 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:19.385 03:47:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.385 03:47:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.385 03:47:54 -- common/autotest_common.sh@10 -- # set +x
00:04:19.385 ************************************
00:04:19.385 START TEST even_2G_alloc
00:04:19.385 ************************************
00:04:19.385 03:47:54 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:19.385 03:47:54 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:19.385 03:47:54 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:19.385 03:47:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:19.385 03:47:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:19.385 03:47:54 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:19.385 03:47:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:19.385 03:47:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:19.385 03:47:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:19.385 03:47:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:19.385 03:47:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:19.385 03:47:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:19.385 03:47:54 -- setup/hugepages.sh@83 -- # : 0
00:04:19.385 03:47:54 -- setup/hugepages.sh@84 -- # : 0
00:04:19.385 03:47:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.385 03:47:54 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:19.385 03:47:54 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
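even_2G_alloc starts by turning a size into a page count: get_test_nr_hugepages 2097152 with the default 2048 kB hugepage size yields nr_hugepages=1024 (2 GiB worth), and with no explicit node list and _no_nodes=1 the whole count lands on node 0 (nodes_test[0]=1024), exactly what the trace shows. A sketch of that arithmetic, assuming the size argument is in kB as the values above suggest:

```bash
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages' size-to-count step as traced above
# (illustrative): 2097152 kB / 2048 kB per page = 1024 pages, spread
# evenly across the detected nodes -- a single node in this VM.
size_kb=2097152
hugepagesize_kb=2048
no_nodes=1

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))

declare -a nodes_test
for (( n = 0; n < no_nodes; n++ )); do
    nodes_test[n]=$per_node                     # nodes_test[0]=1024
done
echo "nr_hugepages=$nr_hugepages"
```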
00:04:19.385 03:47:54 -- setup/hugepages.sh@153 -- # setup output
00:04:19.385 03:47:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.385 03:47:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:19.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:19.956 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:19.956 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:19.956 03:47:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:19.956 03:47:54 -- setup/hugepages.sh@89 -- # local node
00:04:19.956 03:47:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.956 03:47:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.956 03:47:54 -- setup/hugepages.sh@92 -- # local surp
00:04:19.956 03:47:54 -- setup/hugepages.sh@93 -- # local resv
00:04:19.956 03:47:54 -- setup/hugepages.sh@94 -- # local anon
00:04:19.956 03:47:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:19.956 03:47:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:19.956 03:47:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:19.956 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.956 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.956 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.956 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.956 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.956 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.956 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.956 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.956 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.956 03:47:54 -- setup/common.sh@31 -- # read -r var val _
00:04:19.956 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957016 kB' 'MemAvailable: 9468440 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 498060 kB' 'Inactive: 1345276 kB' 'Active(anon): 128904 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50848 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163808 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95932 kB' 'KernelStack: 6504 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.956 03:47:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.956 03:47:54 -- setup/common.sh@32 -- # continue
[... compare/continue trace elided: every field from MemFree through HardwareCorrupted is skipped ...]
00:04:19.957 03:47:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.957 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.957 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.957 03:47:54 -- setup/hugepages.sh@97 -- # anon=0
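verify_nr_hugepages also has to decide whether anonymous (transparent) hugepages can distort the numbers: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above reads the kernel's THP mode string, in which the active mode is the bracketed word, and only probes AnonHugePages when THP is not disabled. A sketch of that probe:

```bash
#!/usr/bin/env bash
# Sketch of the THP probe traced above (illustrative): the kernel marks
# the active mode in brackets, e.g. "always [madvise] never"; only when
# the mode is not [never] does AnonHugePages matter for the accounting.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

if [[ $thp_mode != *"[never]"* ]]; then
    anon_kb=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
else
    anon_kb=0
fi
echo "anon_hugepages=${anon_kb}"   # 0 kB in the run above
```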
00:04:19.957 03:47:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.957 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.957 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.957 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.957 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.957 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.957 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.957 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.957 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.957 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.957 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.957 03:47:54 -- setup/common.sh@31 -- # read -r var val _
00:04:19.957 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957632 kB' 'MemAvailable: 9469056 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 1345276 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 50848 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163808 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95932 kB' 'KernelStack: 6456 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.957 03:47:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.957 03:47:54 -- setup/common.sh@32 -- # continue
[... compare/continue trace elided: the loop is still walking the remaining fields of the snapshot ...]
00:04:19.958 03:47:54 --
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- 
setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.958 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.958 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.959 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.959 03:47:54 -- setup/common.sh@32 -- # continue 00:04:19.959 03:47:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.959 03:47:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.959 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.959 03:47:54 -- setup/common.sh@33 -- # echo 0 00:04:19.959 03:47:54 -- setup/common.sh@33 -- # return 0 00:04:19.959 03:47:54 -- setup/hugepages.sh@99 -- # surp=0 
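The xtrace above is setup/common.sh's get_meminfo helper resolving HugePages_Surp: it snapshots the meminfo file with mapfile, strips any per-node "Node N " prefix, then splits each line on IFS=': ' until the requested field matches. A minimal stand-alone sketch of that parsing pattern, runnable on its own (the function name and exact layout are illustrative, not a verbatim copy of the repo helper):

  #!/usr/bin/env bash
  # Sketch: print the value of one meminfo field, system-wide or per NUMA node.
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  get_meminfo_sketch() {
      local get=$1 node=${2:-}           # field name, optional node number
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # node files prefix lines with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # split "Field: value kB"
          if [[ $var == "$get" ]]; then echo "${val:-0}"; return 0; fi
      done
      echo 0                             # field absent, report 0
  }
  get_meminfo_sketch HugePages_Surp      # -> 0 on the snapshot above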
00:04:19.959 03:47:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.959 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.959 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.959 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.959 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.959 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.959 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.959 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.959 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.959 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.959 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.959 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957992 kB' 'MemAvailable: 9469416 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497808 kB' 'Inactive: 1345276 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163828 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95952 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.959 03:47:54 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd; none match, each iteration continues ...]
00:04:19.960 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.960 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.960 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.960 03:47:54 -- setup/hugepages.sh@100 -- # resv=0
00:04:19.960 nr_hugepages=1024
00:04:19.960 03:47:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.960 resv_hugepages=0
00:04:19.960 03:47:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.960 surplus_hugepages=0
00:04:19.960 03:47:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.960 anon_hugepages=0
00:04:19.960 03:47:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.960 03:47:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.960 03:47:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
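Restated outside the trace, the pass condition just evaluated is that the configured pool is fully accounted for, with nothing surplus or reserved. A sketch using the illustrative helper from earlier (the commented values are the ones captured in this run):

  # Pass condition for the even_2G_alloc case, as exercised above:
  nr_hugepages=1024                                  # requested pool size
  surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo "FAIL: pages unaccounted for"
  (( total == nr_hugepages ))               || echo "FAIL: pool size drifted"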
00:04:19.960 03:47:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.960 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.960 03:47:54 -- setup/common.sh@18 -- # local node=
00:04:19.960 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.960 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.960 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.960 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.960 03:47:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.960 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.960 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.960 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957992 kB' 'MemAvailable: 9469416 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497776 kB' 'Inactive: 1345276 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163820 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95944 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:19.960 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.960 03:47:54 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: every field from MemTotal through Unaccepted is tested against HugePages_Total; none match, each iteration continues ...]
00:04:19.962 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.962 03:47:54 -- setup/common.sh@33 -- # echo 1024
00:04:19.962 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.962 03:47:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.962 03:47:54 -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.962 03:47:54 -- setup/hugepages.sh@27 -- # local node
00:04:19.962 03:47:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.962 03:47:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:19.962 03:47:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:19.962 03:47:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.962 03:47:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.962 03:47:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.962 03:47:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.962 03:47:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.962 03:47:54 -- setup/common.sh@18 -- # local node=0
00:04:19.962 03:47:54 -- setup/common.sh@19 -- # local var val
00:04:19.962 03:47:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:19.962 03:47:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.962 03:47:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.962 03:47:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.962 03:47:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.962 03:47:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.962 03:47:54 -- setup/common.sh@31 -- # IFS=': '
00:04:19.962 03:47:54 -- setup/common.sh@31 -- # read -r var val _
00:04:19.962 03:47:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957992 kB' 'MemUsed: 4281116 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 1345276 kB' 'Active(anon): 128720 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1724916 kB' 'Mapped: 50720 kB' 'AnonPages: 119812 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163808 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
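Note the node-scoped variant just above: given a node argument, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry the "Node 0 " prefix that the ${mem[@]#Node +([0-9]) } expansion strips before the same field scan runs (the node file also exposes a slightly different field set, e.g. MemUsed and FilePages instead of MemAvailable). Illustrative usage of the sketch from earlier:

  # Same parse, node-scoped: the sketch picks the node0 file when a
  # node number is passed (see get_meminfo_sketch above).
  node0_total=$(get_meminfo_sketch HugePages_Total 0)   # -> 1024 in this run
  node0_surp=$(get_meminfo_sketch HugePages_Surp 0)     # -> 0 in this run
  echo "node0: total=$node0_total surp=$node0_surp"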
[... xtrace elided: every node0 field from MemTotal through HugePages_Free is tested against HugePages_Surp; none match, each iteration continues ...]
00:04:19.963 03:47:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.963 03:47:54 -- setup/common.sh@33 -- # echo 0
00:04:19.963 03:47:54 -- setup/common.sh@33 -- # return 0
00:04:19.963 03:47:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.963 03:47:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.963 03:47:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.963 03:47:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.963 03:47:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:19.963 node0=1024 expecting 1024
00:04:19.963 03:47:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:19.963 real	0m0.531s
00:04:19.963 user	0m0.261s
00:04:19.963 sys	0m0.300s
00:04:19.963 03:47:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:19.963 03:47:54 -- common/autotest_common.sh@10 -- # set +x
00:04:19.963 ************************************
00:04:19.963 END TEST even_2G_alloc
00:04:19.963 ************************************
00:04:19.963 03:47:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:19.963 03:47:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.963 03:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.963 03:47:55 -- common/autotest_common.sh@10 -- # set +x
00:04:19.963 ************************************
00:04:19.963 START TEST odd_alloc
00:04:19.963 ************************************
00:04:19.963 03:47:55 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:19.963 03:47:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:19.963 03:47:55 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:19.963 03:47:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:19.963 03:47:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:19.963 03:47:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:19.963 03:47:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:19.963 03:47:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:19.963 03:47:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:19.963 03:47:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:19.963 03:47:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:19.963 03:47:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:19.963 03:47:55 -- setup/hugepages.sh@83 -- # : 0
00:04:19.963 03:47:55 -- setup/hugepages.sh@84 -- # : 0
00:04:19.963 03:47:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.963 03:47:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:19.963 03:47:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:19.963 03:47:55 -- setup/hugepages.sh@160 -- # setup output
00:04:19.963 03:47:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.963 03:47:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:20.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:20.533 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:20.533 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:20.533 03:47:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:20.533 03:47:55 -- setup/hugepages.sh@89 -- # local node
00:04:20.533 03:47:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:20.533 03:47:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:20.533 03:47:55 -- setup/hugepages.sh@92 -- # local surp
00:04:20.533 03:47:55 -- setup/hugepages.sh@93 -- # local resv
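The size argument and the page count traced above fit a simple relationship: HUGEMEM=2049 (MB) becomes size=2098176 kB, and dividing by the 2048 kB Hugepagesize reported in the meminfo snapshots below yields the deliberately odd count of 1025 pages. A sketch of that arithmetic; ceiling division is our reading, since the trace only shows the input and the result:

  # Reproduce nr_hugepages=1025 from HUGEMEM=2049; the rounding mode is an
  # assumption consistent with the size=2098176 -> nr_hugepages=1025 pair above.
  HUGEMEM=2049
  size_kb=$(( HUGEMEM * 1024 ))            # 2098176, as passed to get_test_nr_hugepages
  hugepagesize_kb=2048                     # Hugepagesize from the snapshots below
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"        # 1025 -- odd on purpose, hence odd_alloc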
00:04:20.533 03:47:55 -- setup/hugepages.sh@94 -- # local anon
00:04:20.533 03:47:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:20.533 03:47:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:20.533 03:47:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:20.533 03:47:55 -- setup/common.sh@18 -- # local node=
00:04:20.533 03:47:55 -- setup/common.sh@19 -- # local var val
00:04:20.533 03:47:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.533 03:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.533 03:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.533 03:47:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.533 03:47:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.533 03:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.533 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.533 03:47:55 -- setup/common.sh@31 -- # read -r var val _
00:04:20.533 03:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7956916 kB' 'MemAvailable: 9468340 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 498160 kB' 'Inactive: 1345276 kB' 'Active(anon): 129004 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120132 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163824 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95948 kB' 'KernelStack: 6568 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:20.533 03:47:55 [xtrace elided: setup/common.sh@32 reads and skips every key from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:04:20.534 03:47:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.534 03:47:55 -- setup/common.sh@33 -- # echo 0
00:04:20.534 03:47:55 -- setup/common.sh@33 -- # return 0
00:04:20.534 03:47:55 -- setup/hugepages.sh@97 -- # anon=0
00:04:20.534 03:47:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:20.534 03:47:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.534 03:47:55 -- setup/common.sh@18 -- # local node=
00:04:20.534 03:47:55 -- setup/common.sh@19 -- # local var val
00:04:20.534 03:47:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.534 03:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
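The local node= / [[ -e /sys/devices/system/node/node/meminfo ]] / [[ -n '' ]] triple that recurs in each get_meminfo call is the helper picking its input file. Because node is empty throughout this test, the probe path degenerates to the literal (nonexistent) .../node/node/meminfo, both tests fail, and mem_f stays /proc/meminfo. A sketch of that selection under our own naming:

  # Illustrative only -- the function name is ours; the logic mirrors the
  # traced [[ -e ]] / [[ -n ]] pair in setup/common.sh.
  meminfo_file() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      echo "$mem_f"
  }

  meminfo_file       # -> /proc/meminfo (node unset, as throughout this trace)
  meminfo_file 0     # -> the per-node file on a NUMA box that exposes node0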
00:04:20.534 03:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.534 03:47:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.534 03:47:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.534 03:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.534 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.534 03:47:55 -- setup/common.sh@31 -- # read -r var val _
00:04:20.534 03:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957168 kB' 'MemAvailable: 9468592 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497792 kB' 'Inactive: 1345276 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163836 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95960 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:20.534 03:47:55 [xtrace elided: setup/common.sh@32 reads and skips every key from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:04:20.535 03:47:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.535 03:47:55 -- setup/common.sh@33 -- # echo 0
00:04:20.535 03:47:55 -- setup/common.sh@33 -- # return 0
00:04:20.535 03:47:55 -- setup/hugepages.sh@99 -- # surp=0
00:04:20.535 03:47:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:20.535 03:47:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:20.535 03:47:55 -- setup/common.sh@18 -- # local node=
00:04:20.535 03:47:55 -- setup/common.sh@19 -- # local var val
00:04:20.535 03:47:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.535 03:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.535 03:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.535 03:47:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.535 03:47:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.535 03:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.535 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.536 03:47:55 -- setup/common.sh@31 -- # read -r var val _
00:04:20.536 03:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957168 kB' 'MemAvailable: 9468592 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497700 kB' 'Inactive: 1345276 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163820 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95944 kB' 'KernelStack: 6480 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
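One subtle line in each call above is mem=("${mem[@]#Node +([0-9]) }"). Per-node meminfo files prefix every line with "Node N ", so this extglob expansion strips that prefix and lets the same key scan handle both the global and the per-node file; against /proc/meminfo it is a no-op. A runnable sketch of just that step:

  shopt -s extglob                      # +([0-9]) in the expansion needs extglob
  mapfile -t mem < /proc/meminfo        # snapshot the file into an array
  mem=("${mem[@]#Node +([0-9]) }")      # strips "Node 0 " etc. from per-node input
  printf '%s\n' "${mem[0]}"             # e.g. "MemTotal:       12239108 kB"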
00:04:20.536 03:47:55 [xtrace elided: setup/common.sh@32 reads and skips every key from MemTotal through HugePages_Free while scanning for HugePages_Rsvd]
00:04:20.537 03:47:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:20.537 03:47:55 -- setup/common.sh@33 -- # echo 0
00:04:20.537 03:47:55 -- setup/common.sh@33 -- # return 0
00:04:20.537 03:47:55 -- setup/hugepages.sh@100 -- # resv=0
00:04:20.537 nr_hugepages=1025
00:04:20.537 03:47:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:20.537 resv_hugepages=0
00:04:20.537 03:47:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:20.537 surplus_hugepages=0
00:04:20.537 03:47:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:20.537 anon_hugepages=0
00:04:20.537 03:47:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:20.537 03:47:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:20.537 03:47:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:20.537 03:47:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:20.537 03:47:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:20.537 03:47:55 -- setup/common.sh@18 -- # local node=
00:04:20.537 03:47:55 -- setup/common.sh@19 -- # local var val
00:04:20.537 03:47:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.537 03:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.537 03:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.537 03:47:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.537 03:47:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.537 03:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.537 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.537 03:47:55 -- setup/common.sh@31 -- # read -r var val _
00:04:20.537 03:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957168 kB' 'MemAvailable: 9468592 kB' 'Buffers: 3448 kB' 'Cached: 1721468 kB' 'SwapCached: 0 kB' 'Active: 497788 kB' 'Inactive: 1345276 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163820 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95944 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
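With the counters gathered, the verification above reduces to two arithmetic tests: the configured total must equal the base pool plus surplus and reserved pages, and for odd_alloc that total must be exactly 1025. Isolated here with the values echoed in the log:

  # Values taken from the nr_hugepages/resv_hugepages/surplus_hugepages/
  # anon_hugepages lines echoed above.
  nr_hugepages=1025 surp=0 resv=0 anon=0
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )) &&
      echo 'odd_alloc hugepage accounting is consistent'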
00:04:20.538 03:47:55 [xtrace elided: setup/common.sh@32 reads and skips each key from MemTotal through VmallocChunk while scanning for HugePages_Total]
00:04:20.538 03:47:55 -- setup/common.sh@32
-- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # continue 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.538 03:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.538 03:47:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.538 03:47:55 -- setup/common.sh@33 -- # echo 1025 00:04:20.538 03:47:55 -- setup/common.sh@33 -- # return 0 00:04:20.538 03:47:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.538 03:47:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.538 03:47:55 -- setup/hugepages.sh@27 -- # local node 00:04:20.538 03:47:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.538 03:47:55 -- 
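The block above is setup/common.sh's get_meminfo helper doing a linear scan: each meminfo line is split on ': ' into a key and a value, every key that is not the requested one is skipped with continue, and the value of the first match is echoed back (here HugePages_Total = 1025, the deliberately odd page count odd_alloc configured). A minimal standalone sketch of that lookup, reconstructed from the trace (the real helper in the SPDK test tree may handle more cases):

shopt -s extglob    # needed for the +([0-9]) pattern that strips node prefixes

# get_meminfo KEY [NODE] - print one field from /proc/meminfo, or from a
# per-node meminfo file when NODE is given. Per-node files prefix every
# line with "Node <n> ", which is stripped exactly as the trace shows.
get_meminfo() {
	local get=$1 node=${2:-} var val _ line
	local mem_f=/proc/meminfo
	local -a mem
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # not the key we want, keep scanning
		echo "$val"                        # value only; any "kB" unit lands in _
		return 0
	done
	return 1
}

Matching the trace: get_meminfo HugePages_Total answers 1025 from /proc/meminfo, while get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo instead.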
00:04:20.538 03:47:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:20.538 03:47:55 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:20.538 03:47:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:20.538 03:47:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.538 03:47:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.538 03:47:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:20.538 03:47:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.538 03:47:55 -- setup/common.sh@18 -- # local node=0
00:04:20.539 03:47:55 -- setup/common.sh@19 -- # local var val
00:04:20.539 03:47:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.539 03:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.539 03:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:20.539 03:47:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.539 03:47:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.539 03:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.539 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.539 03:47:55 -- setup/common.sh@31 -- # read -r var val _
00:04:20.539 03:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7957444 kB' 'MemUsed: 4281664 kB' 'SwapCached: 0 kB' 'Active: 497736 kB' 'Inactive: 1345276 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1724916 kB' 'Mapped: 50720 kB' 'AnonPages: 119736 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163820 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:20.539 03:47:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.539 03:47:55 -- setup/common.sh@32 -- # continue
00:04:20.539 03:47:55 -- setup/common.sh@31 -- # IFS=': '
00:04:20.539 03:47:55 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue trace repeats for each remaining node0 meminfo key, MemFree through HugePages_Free ...]
00:04:20.540 03:47:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.540 03:47:55 -- setup/common.sh@33 -- # echo 0
00:04:20.540 03:47:55 -- setup/common.sh@33 -- # return 0
00:04:20.540 03:47:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.540 03:47:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.540 03:47:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.540 node0=1025 expecting 1025
00:04:20.540 03:47:55 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:20.540 03:47:55 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:20.540
00:04:20.540 real 0m0.533s
00:04:20.540 user 0m0.261s
00:04:20.540 sys 0m0.307s
00:04:20.540 03:47:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:20.540 03:47:55 -- common/autotest_common.sh@10 -- # set +x
00:04:20.540 ************************************
00:04:20.540 END TEST odd_alloc
00:04:20.540 ************************************
00:04:20.540 03:47:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:20.540 03:47:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:20.540 03:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:20.540 03:47:55 -- common/autotest_common.sh@10 -- # set +x
00:04:20.540 ************************************
00:04:20.540 START TEST custom_alloc
00:04:20.540 ************************************
00:04:20.540 03:47:55 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:20.540 03:47:55 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:20.540 03:47:55 -- setup/hugepages.sh@169 -- # local node
00:04:20.540 03:47:55 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:20.540 03:47:55 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:20.540 03:47:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:20.540 03:47:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:20.540 03:47:55 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:20.540 03:47:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:20.540 03:47:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:20.540 03:47:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.540 03:47:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:20.540 03:47:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.540 03:47:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.540 03:47:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@83 -- # : 0
00:04:20.540 03:47:55 -- setup/hugepages.sh@84 -- # : 0
00:04:20.540 03:47:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:20.540 03:47:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:20.540 03:47:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:20.540 03:47:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:20.540 03:47:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.540 03:47:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:20.540 03:47:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.540 03:47:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.540 03:47:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:20.540 03:47:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:20.540 03:47:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:20.540 03:47:55 -- setup/hugepages.sh@78 -- # return 0
00:04:20.540 03:47:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:20.540 03:47:55 -- setup/hugepages.sh@187 -- # setup output
00:04:20.540 03:47:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.540 03:47:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:21.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:21.110 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.110 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.110 03:47:56 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:21.110 03:47:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:21.110 03:47:56 -- setup/hugepages.sh@89 -- # local node
00:04:21.110 03:47:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.110 03:47:56 -- setup/hugepages.sh@91 -- # local sorted_s
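A few entries back, get_test_nr_hugepages turned the 1048576 kB (1 GiB) request into nr_hugepages=512 by dividing by the default hugepage size, and get_test_nr_hugepages_per_node spread those 512 pages over this VM's single memory node, yielding HUGENODE='nodes_hp[0]=512' for scripts/setup.sh. A sketch of just the sizing arithmetic, with the helper's signature inferred from the trace and Hugepagesize read the same way the snapshots report it:

# Default hugepage size in kB (2048 on this VM, per the Hugepagesize field).
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

get_test_nr_hugepages() {
	local size=$1                                # requested pool size in kB
	(( size >= default_hugepages )) || return 1  # must fit at least one page
	nr_hugepages=$(( size / default_hugepages ))
}

get_test_nr_hugepages 1048576
echo "$nr_hugepages"    # 1048576 / 2048 = 512, matching nr_hugepages=512 above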
00:04:21.110 03:47:56 -- setup/hugepages.sh@92 -- # local surp
00:04:21.110 03:47:56 -- setup/hugepages.sh@93 -- # local resv
00:04:21.110 03:47:56 -- setup/hugepages.sh@94 -- # local anon
00:04:21.110 03:47:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.110 03:47:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.110 03:47:56 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.110 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.110 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.111 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.111 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.111 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.111 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.111 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.111 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.111 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.111 03:47:56 -- setup/common.sh@31 -- # read -r var val _
00:04:21.111 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9011448 kB' 'MemAvailable: 10522876 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498052 kB' 'Inactive: 1345280 kB' 'Active(anon): 128896 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50956 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163780 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95904 kB' 'KernelStack: 6504 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:21.111 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.111 03:47:56 -- setup/common.sh@32 -- # continue
00:04:21.111 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.111 03:47:56 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue trace repeats for each remaining key, MemFree through HardwareCorrupted ...]
00:04:21.112 03:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.112 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.112 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.112 03:47:56 -- setup/hugepages.sh@97 -- # anon=0
00:04:21.112 03:47:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.112 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.112 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.112 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.112 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
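Earlier in this block, the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry is xtrace showing a glob test against the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled: the kernel brackets the active THP mode, and since this host brackets [madvise] rather than [never], transparent hugepages are live, so verify_nr_hugepages samples AnonHugePages (0 kB here) before auditing the hugetlb pool. Roughly, reusing the get_meminfo sketch above:

# The bracketed token marks the active THP mode, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	# THP is not fully disabled, so anonymous mappings may be THP-backed;
	# record the current AnonHugePages figure as a baseline. (What the
	# script later does with $anon is outside the captured trace.)
	anon=$(get_meminfo AnonHugePages)
fi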
00:04:21.112 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.112 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.112 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.112 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.112 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.112 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.112 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9011700 kB' 'MemAvailable: 10523128 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 1345280 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119676 kB' 'Mapped: 50956 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163772 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95896 kB' 'KernelStack: 6440 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:21.112 03:47:56 -- setup/common.sh@31 -- # read -r var val _
00:04:21.112 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.112 03:47:56 -- setup/common.sh@32 -- # continue
00:04:21.112 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.112 03:47:56 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue trace repeats for each remaining key, MemFree through HugePages_Rsvd ...]
00:04:21.113 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.113 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.113 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.113 03:47:56 -- setup/hugepages.sh@99 -- # surp=0
00:04:21.113 03:47:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:21.113 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.113 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.113 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.113 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.113 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.113 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.113 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.113 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.113 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.113 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.113 03:47:56 -- setup/common.sh@31 -- # read -r var val _
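With anon=0 and surp=0 recorded, get_meminfo is invoked one more time for HugePages_Rsvd; the invariant verify_nr_hugepages is assembling is the one odd_alloc asserted earlier as (( 1025 == nr_hugepages + surp + resv )): the kernel's HugePages_Total must equal the pages the test requested plus surplus plus reserved pages. A compact restatement, reusing the get_meminfo sketch above (the failure message is an assumption; the trace only shows the passing path):

verify_total() {    # usage: verify_total EXPECTED_NR_HUGEPAGES
	local nr_hugepages=$1 surp resv total
	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)
	total=$(get_meminfo HugePages_Total)
	(( total == nr_hugepages + surp + resv )) || {
		echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
		return 1
	}
}

verify_total 512    # custom_alloc asked for 512 pages; surp and resv are both 0 in the snapshots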
00:04:21.114 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9011700 kB' 'MemAvailable: 10523128 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497928 kB' 'Inactive: 1345280 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 50848 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163772 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95896 kB' 'KernelStack: 6472 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:21.114 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.114 03:47:56 -- setup/common.sh@32 -- # continue
00:04:21.114 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.114 03:47:56 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue trace repeats for each remaining key, MemFree through CommitLimit ...]
00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue
00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var
val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 
00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.115 03:47:56 -- setup/common.sh@33 -- # echo 0 00:04:21.115 03:47:56 -- setup/common.sh@33 -- # return 0 00:04:21.115 03:47:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.115 03:47:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:21.115 nr_hugepages=512 00:04:21.115 resv_hugepages=0 00:04:21.115 03:47:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.115 surplus_hugepages=0 00:04:21.115 03:47:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.115 anon_hugepages=0 00:04:21.115 03:47:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.115 03:47:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:21.115 03:47:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:21.115 03:47:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.115 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.115 03:47:56 -- setup/common.sh@18 -- # local node= 00:04:21.115 03:47:56 -- setup/common.sh@19 -- # local var val 00:04:21.115 03:47:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.115 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.115 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.115 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.115 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.115 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.115 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.115 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9011700 kB' 'MemAvailable: 10523128 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497740 kB' 'Inactive: 1345280 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163788 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95912 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
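For readability, the helper this trace keeps re-entering (setup/common.sh@16-@33) is summarized below. This is a sketch reconstructed from the xtrace alone, not a verbatim copy of setup/common.sh, but every step shown appears in the trace: pick /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix, then scan key by key until the requested field matches.

    # Sketch of get_meminfo as reconstructed from the xtrace above.
    shopt -s extglob                     # the +([0-9]) pattern below needs extglob

    get_meminfo() {                      # usage: get_meminfo <field> [<node>]
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node id, prefer the per-node view when the kernel exposes one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # This loop is the long [[ key == ... ]] / continue xtrace: split each
        # "Key: value kB" line and print the value once the key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The per-character escaping in entries such as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] is just how xtrace quotes the right-hand pattern of this loop's comparison.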
00:04:21.116 [scan xtrace condensed: every /proc/meminfo key compared against HugePages_Total until it matches]
00:04:21.117 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.117 03:47:56 -- setup/common.sh@33 -- # echo 512
00:04:21.117 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.117 03:47:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:21.117 03:47:56 -- setup/hugepages.sh@112 -- # get_nodes
00:04:21.117 03:47:56 -- setup/hugepages.sh@27 -- # local node
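With surp and resv read back and HugePages_Total confirmed, the checks traced at hugepages.sh@107-@110 are a plain consistency test on the pool. A minimal sketch of that accounting, reusing the get_meminfo sketch from earlier:

    nr_hugepages=512                        # count the test configured
    surp=$(get_meminfo HugePages_Surp)      # surplus pages, 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # reserved pages, 0 in this run
    total=$(get_meminfo HugePages_Total)    # 512 in this run

    # The kernel's pool must equal requested + surplus + reserved pages:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2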
00:04:21.117 03:47:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.117 03:47:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:21.117 03:47:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:21.117 03:47:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:21.117 03:47:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.117 03:47:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.117 03:47:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:21.117 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.117 03:47:56 -- setup/common.sh@18 -- # local node=0
00:04:21.117 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.117 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.117 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.117 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:21.117 03:47:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:21.117 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.117 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.117 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.117 03:47:56 -- setup/common.sh@31 -- # read -r var val _
00:04:21.117 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 9011700 kB' 'MemUsed: 3227408 kB' 'SwapCached: 0 kB' 'Active: 497744 kB' 'Inactive: 1345280 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1724920 kB' 'Mapped: 50720 kB' 'AnonPages: 119720 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163776 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:21.117 [scan xtrace condensed: every node0 meminfo key compared against HugePages_Surp until it matches]
00:04:21.118 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.118 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.118 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.118 03:47:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.118 03:47:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.118 03:47:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.118 03:47:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.118 node0=512 expecting 512
00:04:21.118 03:47:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:21.118 03:47:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:21.118
00:04:21.118 real 0m0.533s
00:04:21.118 user 0m0.282s
00:04:21.118 sys 0m0.286s
00:04:21.118 03:47:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:21.118 03:47:56 -- common/autotest_common.sh@10 -- # set +x
00:04:21.118 ************************************
00:04:21.118 END TEST custom_alloc
00:04:21.118 ************************************
00:04:21.118 03:47:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:21.118 03:47:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:21.118 03:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:21.118 03:47:56 -- common/autotest_common.sh@10 -- # set +x
00:04:21.118 ************************************
00:04:21.118 START TEST no_shrink_alloc
00:04:21.118 ************************************
00:04:21.118 03:47:56 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:21.118 03:47:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:21.118 03:47:56 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:21.118 03:47:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
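Before no_shrink_alloc repeats the same dance with a 1024-page pool, it is worth spelling out the per-node bookkeeping that custom_alloc just finished (get_nodes plus the hugepages.sh@115-@130 loop that prints node0=512 expecting 512). The sketch below follows the traced variable names; how nodes_sys gets its per-node count is not visible in the trace, so the get_meminfo-based read is an assumption, and verify_nodes is only an illustrative wrapper name for the traced loop.

    declare -a nodes_sys nodes_test         # indexed by NUMA node id

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do   # extglob
            # Assumed: per-node total via get_meminfo's node argument.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                  # at least one node must exist
    }

    verify_nodes() {
        local node
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))                                   # global reserve
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # node surplus
            echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
            [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]                # 512 == 512 here
        done
    }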
00:04:21.118 03:47:56 -- setup/hugepages.sh@51 -- # shift
00:04:21.118 03:47:56 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:21.118 03:47:56 -- setup/hugepages.sh@52 -- # local node_ids
00:04:21.118 03:47:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:21.118 03:47:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:21.118 03:47:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:21.118 03:47:56 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:21.118 03:47:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:21.118 03:47:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:21.118 03:47:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:21.118 03:47:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:21.118 03:47:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:21.118 03:47:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:21.118 03:47:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:21.118 03:47:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:21.118 03:47:56 -- setup/hugepages.sh@73 -- # return 0
00:04:21.118 03:47:56 -- setup/hugepages.sh@198 -- # setup output
00:04:21.118 03:47:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.118 03:47:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:21.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:21.690 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.690 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.690 03:47:56 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:21.690 03:47:56 -- setup/hugepages.sh@89 -- # local node
00:04:21.690 03:47:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.690 03:47:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.690 03:47:56 -- setup/hugepages.sh@92 -- # local surp
00:04:21.690 03:47:56 -- setup/hugepages.sh@93 -- # local resv
00:04:21.690 03:47:56 -- setup/hugepages.sh@94 -- # local anon
00:04:21.690 03:47:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.690 03:47:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.690 03:47:56 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.690 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.690 [get_meminfo boilerplate condensed: mem_f=/proc/meminfo, mapfile -t mem, IFS=': ' read scan]
00:04:21.690 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7960456 kB' 'MemAvailable: 9471884 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498040 kB' 'Inactive: 1345280 kB' 'Active(anon): 128884 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163816 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95940 kB' 'KernelStack: 6504 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
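Backing up to the sizing traced at hugepages.sh@49-@73: the numbers (2097152 / 2048 = 1024, with Hugepagesize: 2048 kB in the snapshots) suggest that both the size argument and default_hugepages are in kB, which the sketch below assumes; the real script may differ in detail.

    default_hugepages=2048                   # Hugepagesize in kB, per /proc/meminfo

    get_test_nr_hugepages() {
        local size=$1                        # requested pool size, assumed kB
        local node_ids=()
        if (( $# > 1 )); then                # the traced "(( 2 > 1 ))"
            shift
            node_ids=("$@")                  # ('0') in this run
        fi
        (( size >= default_hugepages ))      # room for at least one page
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")
        local _nr_hugepages=$nr_hugepages
        local _no_nodes
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$_nr_hugepages       # node 0 -> all 1024 pages
        done
    }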
00:04:21.690 [scan xtrace condensed: every /proc/meminfo key compared against AnonHugePages until it matches]
00:04:21.691 03:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.691 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.691 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.691 03:47:56 -- setup/hugepages.sh@97 -- # anon=0
00:04:21.691 03:47:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.691 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.691 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.691 [get_meminfo boilerplate condensed: mem_f=/proc/meminfo, mapfile -t mem, IFS=': ' read scan]
00:04:21.691 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7961136 kB' 'MemAvailable: 9472564 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497788 kB' 'Inactive: 1345280 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163828 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95952 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
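One detail from the opening of verify_nr_hugepages above: before reading AnonHugePages, hugepages.sh@96 tests the string always [madvise] never against *\[\n\e\v\e\r\]*, which by all appearances is the content of /sys/kernel/mm/transparent_hugepage/enabled. A sketch of that gate under that assumption:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    # Anonymous hugepages can only exist while THP is not pinned to [never],
    # so only then does AnonHugePages need to be folded into the accounting:
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # 0 kB in this run
    fi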
00:04:21.691 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.691 03:47:56 -- setup/common.sh@32 -- # continue
00:04:21.691 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.691 03:47:56 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue trace entries for the remaining keys, MemFree through HugePages_Free, until the requested key matches ...]
00:04:21.692 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.692 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.692 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.692 03:47:56 -- setup/hugepages.sh@99 -- # surp=0
00:04:21.692 03:47:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:21.692 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.692 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.692 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.692 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.692 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.692 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.692 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.692 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.692 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.692 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.692 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7961136 kB' 'MemAvailable: 9472564 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 1345280 kB' 'Active(anon): 128720 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163796 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95920 kB' 'KernelStack: 6512 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:21.692 03:47:56 -- setup/common.sh@31 -- # read -r var val _
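The long printf '%s\n' 'MemTotal: …' entry is the helper logging the exact snapshot it is about to parse, so the values recorded in the log are the same ones the checks ran against. A hedged sketch of that snapshot-then-log step (the real script feeds mapfile from a process substitution; the effect is the same):

    mapfile -t mem < /proc/meminfo    # one read: every key from the same instant
    printf '%s\n' "${mem[@]}"         # emit the whole snapshot into the build log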
00:04:21.693 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.693 03:47:56 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace entries for the remaining keys, MemFree through HugePages_Free, until the requested key matches ...]
00:04:21.693 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.693 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.694 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.694 03:47:56 -- setup/hugepages.sh@100 -- # resv=0
00:04:21.694 nr_hugepages=1024
00:04:21.694 03:47:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:21.694 resv_hugepages=0
00:04:21.694 03:47:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:21.694 surplus_hugepages=0
00:04:21.694 03:47:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:21.694 anon_hugepages=0
00:04:21.694 03:47:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:21.694 03:47:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.694 03:47:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
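The @107/@109 checks are the accounting identity these reads exist for: the kernel's HugePages_Total must equal the requested pool plus surplus and reserved pages, and here nr_hugepages=1024 with surp, resv and anon all 0. A standalone sketch of the same arithmetic, assuming the get_meminfo sketch above:

    requested=1024                            # pool size the test asked for
    total=$(get_meminfo HugePages_Total)      # 1024 in this trace
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    if (( total == requested + surp + resv )); then
        echo "hugepage pool consistent"
    else
        echo "unexpected hugepage accounting" >&2
    fi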
00:04:21.694 03:47:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:21.694 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:21.694 03:47:56 -- setup/common.sh@18 -- # local node=
00:04:21.694 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.694 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.694 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.694 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.694 03:47:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.694 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.694 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.694 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.694 03:47:56 -- setup/common.sh@31 -- # read -r var val _
00:04:21.694 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7961136 kB' 'MemAvailable: 9472564 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 1345280 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163792 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95916 kB' 'KernelStack: 6496 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:21.694 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.694 03:47:56 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace entries for the remaining keys, MemFree through Unaccepted, until the requested key matches ...]
00:04:21.695 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.695 03:47:56 -- setup/common.sh@33 -- # echo 1024
00:04:21.695 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.695 03:47:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.695 03:47:56 -- setup/hugepages.sh@112 -- # get_nodes
00:04:21.695 03:47:56 -- setup/hugepages.sh@27 -- # local node
00:04:21.695 03:47:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.695 03:47:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:21.695 03:47:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:21.695 03:47:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:21.695 03:47:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.695 03:47:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.695 03:47:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:21.695 03:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.695 03:47:56 -- setup/common.sh@18 -- # local node=0
00:04:21.695 03:47:56 -- setup/common.sh@19 -- # local var val
00:04:21.695 03:47:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.695 03:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.695 03:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:21.695 03:47:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:21.695 03:47:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.695 03:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.695 03:47:56 -- setup/common.sh@31 -- # IFS=': '
00:04:21.695 03:47:56 -- setup/common.sh@31 -- # read -r var val _
00:04:21.696 03:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7961136 kB' 'MemUsed: 4277972 kB' 'SwapCached: 0 kB' 'Active: 497644 kB' 'Inactive: 1345280 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1724920 kB' 'Mapped: 50720 kB' 'AnonPages: 119628 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163784 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
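get_nodes and the get_meminfo HugePages_Surp 0 call show the per-NUMA-node leg: node directories are globbed out of sysfs, a per-node page count is recorded in nodes_sys, and the node's own meminfo file is parsed instead of /proc/meminfo (note the @24 switch to /sys/devices/system/node/node0/meminfo above). The trace assigns the freshly read global total (1024) to nodes_sys; reading each node's own sysfs counter, as sketched below with the standard sysfs layout, is the per-node equivalent:

    shopt -s extglob                          # the +([0-9]) glob in the trace needs extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                      # ".../node0" -> "0"
        # 2 MiB hugepages currently allocated on this node
        nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "node counts: ${nodes_sys[*]}"       # a single node with 1024 pages here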
'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1724920 kB' 'Mapped: 50720 kB' 'AnonPages: 119628 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163784 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 
-- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.696 03:47:56 -- setup/common.sh@32 -- # continue 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.696 03:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.696 03:47:56 -- 
[... xtrace elided: the remaining fields (WritebackTmp through HugePages_Free) each fail the HugePages_Surp comparison and hit 'continue' ...]
00:04:21.696 03:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.696 03:47:56 -- setup/common.sh@33 -- # echo 0
00:04:21.696 03:47:56 -- setup/common.sh@33 -- # return 0
00:04:21.696 03:47:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.696 03:47:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.696 03:47:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.696 03:47:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.696 node0=1024 expecting 1024
00:04:21.696 03:47:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:21.696 03:47:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:21.696 03:47:56 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:21.696 03:47:56 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:21.696 03:47:56 -- setup/hugepages.sh@202 -- # setup output
00:04:21.696 03:47:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.696 03:47:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:22.267 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:22.267 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:22.267 INFO: Requested 512 hugepages but 1024 already allocated on node0
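For context, the INFO line above is scripts/setup.sh declining to shrink an existing pool: NRHUGE=512 pages were requested, but node0 already holds 1024. A minimal sketch of that decision under the standard kernel sysfs layout (illustrative only, not the script's actual code):

#!/usr/bin/env bash
# Request NRHUGE 2 MiB hugepages on node0, keeping any larger existing pool.
NRHUGE=512
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
have=$(<"$nr")
if (( have < NRHUGE )); then
    echo "$NRHUGE" > "$nr"    # grow the per-node pool (needs root)
else
    echo "INFO: Requested $NRHUGE hugepages but $have already allocated on node0"
fi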
00:04:22.267 03:47:57 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:22.267 03:47:57 -- setup/hugepages.sh@89 -- # local node
00:04:22.267 03:47:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.267 03:47:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.267 03:47:57 -- setup/hugepages.sh@92 -- # local surp
00:04:22.267 03:47:57 -- setup/hugepages.sh@93 -- # local resv
00:04:22.267 03:47:57 -- setup/hugepages.sh@94 -- # local anon
00:04:22.267 03:47:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:22.267 03:47:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.267 03:47:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.267 03:47:57 -- setup/common.sh@18 -- # local node=
00:04:22.267 03:47:57 -- setup/common.sh@19 -- # local var val
00:04:22.267 03:47:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.267 03:47:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.267 03:47:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.267 03:47:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.267 03:47:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.267 03:47:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.267 03:47:57 -- setup/common.sh@31 -- # IFS=': '
00:04:22.267 03:47:57 -- setup/common.sh@31 -- # read -r var val _
00:04:22.267 03:47:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7962836 kB' 'MemAvailable: 9474264 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498260 kB' 'Inactive: 1345280 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120276 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163848 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95972 kB' 'KernelStack: 6620 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every field from MemTotal down to HardwareCorrupted fails the AnonHugePages comparison and hits 'continue' ...]
00:04:22.269 03:47:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.269 03:47:57 -- setup/common.sh@33 -- # echo 0
00:04:22.269 03:47:57 -- setup/common.sh@33 -- # return 0
00:04:22.269 03:47:57 -- setup/hugepages.sh@97 -- # anon=0
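Boiled down, the lookup just traced reads a snapshot of /proc/meminfo into an array and scans it line by line, splitting on ': ' and echoing the value of the first field whose name matches. A stand-alone sketch of the same technique (get_field is a hypothetical helper here, not the SPDK function itself):

#!/usr/bin/env bash
# get_field NAME [FILE]: print the value column of the first matching field.
get_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} line var val _
    local -a mem
    mapfile -t mem < "$mem_f"                # one array element per meminfo line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # non-matching fields fall through
        echo "$val"
        return 0
    done
    return 1
}

get_field AnonHugePages    # prints 0 on this box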
00:04:22.269 03:47:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.269 03:47:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.269 03:47:57 -- setup/common.sh@18 -- # local node=
00:04:22.269 03:47:57 -- setup/common.sh@19 -- # local var val
00:04:22.269 03:47:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.269 03:47:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.269 03:47:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.269 03:47:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.269 03:47:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.269 03:47:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.269 03:47:57 -- setup/common.sh@31 -- # IFS=': '
00:04:22.269 03:47:57 -- setup/common.sh@31 -- # read -r var val _
00:04:22.269 03:47:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7962836 kB' 'MemAvailable: 9474264 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498008 kB' 'Inactive: 1345280 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119976 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163836 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95960 kB' 'KernelStack: 6520 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every field from MemTotal down to HugePages_Rsvd fails the HugePages_Surp comparison and hits 'continue' ...]
00:04:22.270 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.270 03:47:57 -- setup/common.sh@33 -- # echo 0
00:04:22.270 03:47:57 -- setup/common.sh@33 -- # return 0
00:04:22.270 03:47:57 -- setup/hugepages.sh@99 -- # surp=0
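A note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: in the script this is an ordinary quoted comparison, and bash's xtrace prints a quoted == operand backslash-escaped to mark it as a literal string rather than a glob. A small reproduction (illustrative):

get=HugePages_Surp
set -x
[[ HugePages_Free == "$get" ]]    # traced as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x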
00:04:22.270 03:47:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.270 03:47:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.270 03:47:57 -- setup/common.sh@18 -- # local node=
00:04:22.270 03:47:57 -- setup/common.sh@19 -- # local var val
00:04:22.270 03:47:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.270 03:47:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.270 03:47:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.270 03:47:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.270 03:47:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.270 03:47:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.270 03:47:57 -- setup/common.sh@31 -- # IFS=': '
00:04:22.270 03:47:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7962836 kB' 'MemAvailable: 9474264 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 1345280 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163824 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95948 kB' 'KernelStack: 6504 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:22.270 03:47:57 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: every field from MemTotal down to HugePages_Free fails the HugePages_Rsvd comparison and hits 'continue' ...]
00:04:22.272 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.272 03:47:57 -- setup/common.sh@33 -- # echo 0
00:04:22.272 03:47:57 -- setup/common.sh@33 -- # return 0
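The lookups so far (AnonHugePages, HugePages_Surp, and now HugePages_Rsvd) all come from the global /proc/meminfo; the kernel exposes the same hugepage counters per page size under sysfs as well. A quick way to eyeball them, assuming the 2 MiB default size in use here:

d=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%-18s %s\n' "$f" "$(<"$d/$f")"
done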
00:04:22.272 nr_hugepages=1024
00:04:22.272 resv_hugepages=0
00:04:22.272 surplus_hugepages=0
00:04:22.272 anon_hugepages=0
00:04:22.272 03:47:57 -- setup/hugepages.sh@100 -- # resv=0
00:04:22.272 03:47:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:22.272 03:47:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.272 03:47:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.272 03:47:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.272 03:47:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.272 03:47:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
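The two arithmetic checks above are the core of verify_nr_hugepages: the requested page count must line up with what the kernel reports once surplus and reserved pages are folded in (both zero in this run). Restated on its own, with awk standing in for the traced field scan (a sketch, not the script's code):

nr_hugepages=1024 surp=0 resv=0
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2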
00:04:22.272 03:47:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.272 03:47:57 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.272 03:47:57 -- setup/common.sh@18 -- # local node=
00:04:22.272 03:47:57 -- setup/common.sh@19 -- # local var val
00:04:22.272 03:47:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.272 03:47:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.272 03:47:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.272 03:47:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.272 03:47:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.272 03:47:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.272 03:47:57 -- setup/common.sh@31 -- # IFS=': '
00:04:22.272 03:47:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7962848 kB' 'MemAvailable: 9474276 kB' 'Buffers: 3448 kB' 'Cached: 1721472 kB' 'SwapCached: 0 kB' 'Active: 498020 kB' 'Inactive: 1345280 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119976 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 67876 kB' 'Slab: 163824 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95948 kB' 'KernelStack: 6520 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:22.272 03:47:57 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: every field from MemTotal down to Unaccepted fails the HugePages_Total comparison and hits 'continue' ...]
00:04:22.273 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.273 03:47:57 -- setup/common.sh@33 -- # echo 1024
00:04:22.273 03:47:57 -- setup/common.sh@33 -- # return 0
00:04:22.273 03:47:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.273 03:47:57 -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.273 03:47:57 -- setup/hugepages.sh@27 -- # local node
00:04:22.273 03:47:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.273 03:47:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:22.273 03:47:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:22.273 03:47:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.273 03:47:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.273 03:47:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.274 03:47:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.274 03:47:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.274 03:47:57 -- setup/common.sh@18 -- # local node=0
00:04:22.274 03:47:57 -- setup/common.sh@19 -- # local var val
00:04:22.274 03:47:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.274 03:47:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.274 03:47:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.274 03:47:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.274 03:47:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.274 03:47:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': '
00:04:22.274 03:47:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7962848 kB' 'MemUsed: 4276260 kB' 'SwapCached: 0 kB' 'Active: 497988 kB' 'Inactive: 1345280 kB' 'Active(anon): 128832 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1345280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1724920 kB' 'Mapped: 50720 kB'
'AnonPages: 119908 kB' 'Shmem: 10484 kB' 'KernelStack: 6504 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67876 kB' 'Slab: 163808 kB' 'SReclaimable: 67876 kB' 'SUnreclaim: 95932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- 
setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.274 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.274 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.275 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.275 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.275 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.275 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.275 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.275 03:47:57 -- setup/common.sh@32 -- # continue 00:04:22.275 03:47:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.275 03:47:57 -- setup/common.sh@31 -- # read -r var val _ 
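The loop condensed above is the entire lookup: get_meminfo streams one meminfo file (system-wide /proc/meminfo, or the per-node sysfs copy when a node number is given), splits each line on ': ', and prints the value of the single requested key. A minimal standalone sketch of that pattern, illustrative only and not the verbatim setup/common.sh helper:

    get_meminfo() {                       # usage: get_meminfo <Key> [node]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Prefer the per-NUMA-node snapshot when a node number was passed in.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix every line with "Node <n> "; strip that so the
        # "Key: value" layout matches /proc/meminfo, then scan for the key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total           # prints 1024 on this run
    get_meminfo HugePages_Surp 0          # prints 0 for node0

The hugepages test then only has to compare the printed values against nr_hugepages, which is exactly the HugePages_Surp match and echo 0 that the trace resumes with below.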
00:04:22.275 03:47:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.275 03:47:57 -- setup/common.sh@33 -- # echo 0 00:04:22.275 03:47:57 -- setup/common.sh@33 -- # return 0 00:04:22.275 03:47:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.275 03:47:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.275 03:47:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.275 03:47:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.275 03:47:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.275 node0=1024 expecting 1024 00:04:22.275 03:47:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.275 00:04:22.275 real 0m1.114s 00:04:22.275 user 0m0.561s 00:04:22.275 sys 0m0.587s 00:04:22.275 03:47:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.275 03:47:57 -- common/autotest_common.sh@10 -- # set +x 00:04:22.275 ************************************ 00:04:22.275 END TEST no_shrink_alloc 00:04:22.275 ************************************ 00:04:22.275 03:47:57 -- setup/hugepages.sh@217 -- # clear_hp 00:04:22.275 03:47:57 -- setup/hugepages.sh@37 -- # local node hp 00:04:22.275 03:47:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:22.275 03:47:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.275 03:47:57 -- setup/hugepages.sh@41 -- # echo 0 00:04:22.275 03:47:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.275 03:47:57 -- setup/hugepages.sh@41 -- # echo 0 00:04:22.275 03:47:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:22.275 03:47:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:22.534 ************************************ 00:04:22.534 END TEST hugepages 00:04:22.534 ************************************ 00:04:22.534 00:04:22.534 real 0m4.884s 00:04:22.534 user 0m2.353s 00:04:22.534 sys 0m2.559s 00:04:22.534 03:47:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.534 03:47:57 -- common/autotest_common.sh@10 -- # set +x 00:04:22.534 03:47:57 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:22.534 03:47:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.534 03:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.534 03:47:57 -- common/autotest_common.sh@10 -- # set +x 00:04:22.534 ************************************ 00:04:22.534 START TEST driver 00:04:22.534 ************************************ 00:04:22.534 03:47:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:22.534 * Looking for test storage... 
00:04:22.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.534 03:47:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:22.534 03:47:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:22.534 03:47:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:22.534 03:47:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:22.534 03:47:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:22.534 03:47:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:22.534 03:47:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:22.534 03:47:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:22.534 03:47:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:22.534 03:47:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.534 03:47:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:22.534 03:47:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:22.534 03:47:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:22.534 03:47:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:22.534 03:47:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:22.534 03:47:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:22.534 03:47:57 -- scripts/common.sh@344 -- # : 1 00:04:22.534 03:47:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:22.534 03:47:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.534 03:47:57 -- scripts/common.sh@364 -- # decimal 1 00:04:22.534 03:47:57 -- scripts/common.sh@352 -- # local d=1 00:04:22.534 03:47:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.534 03:47:57 -- scripts/common.sh@354 -- # echo 1 00:04:22.534 03:47:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:22.534 03:47:57 -- scripts/common.sh@365 -- # decimal 2 00:04:22.534 03:47:57 -- scripts/common.sh@352 -- # local d=2 00:04:22.534 03:47:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.534 03:47:57 -- scripts/common.sh@354 -- # echo 2 00:04:22.534 03:47:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:22.534 03:47:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:22.534 03:47:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:22.534 03:47:57 -- scripts/common.sh@367 -- # return 0 00:04:22.534 03:47:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.534 03:47:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:22.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.534 --rc genhtml_branch_coverage=1 00:04:22.534 --rc genhtml_function_coverage=1 00:04:22.534 --rc genhtml_legend=1 00:04:22.534 --rc geninfo_all_blocks=1 00:04:22.534 --rc geninfo_unexecuted_blocks=1 00:04:22.534 00:04:22.534 ' 00:04:22.534 03:47:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:22.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.534 --rc genhtml_branch_coverage=1 00:04:22.534 --rc genhtml_function_coverage=1 00:04:22.534 --rc genhtml_legend=1 00:04:22.534 --rc geninfo_all_blocks=1 00:04:22.534 --rc geninfo_unexecuted_blocks=1 00:04:22.534 00:04:22.534 ' 00:04:22.534 03:47:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:22.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.534 --rc genhtml_branch_coverage=1 00:04:22.534 --rc genhtml_function_coverage=1 00:04:22.534 --rc genhtml_legend=1 00:04:22.534 --rc geninfo_all_blocks=1 00:04:22.534 --rc geninfo_unexecuted_blocks=1 00:04:22.534 00:04:22.534 ' 00:04:22.534 03:47:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:22.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.534 --rc genhtml_branch_coverage=1 00:04:22.534 --rc genhtml_function_coverage=1 00:04:22.534 --rc genhtml_legend=1 00:04:22.534 --rc geninfo_all_blocks=1 00:04:22.534 --rc geninfo_unexecuted_blocks=1 00:04:22.534 00:04:22.534 ' 00:04:22.534 03:47:57 -- setup/driver.sh@68 -- # setup reset 00:04:22.534 03:47:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.534 03:47:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.101 03:47:58 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:23.101 03:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.101 03:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.101 03:47:58 -- common/autotest_common.sh@10 -- # set +x 00:04:23.101 ************************************ 00:04:23.101 START TEST guess_driver 00:04:23.101 ************************************ 00:04:23.101 03:47:58 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:23.101 03:47:58 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:23.101 03:47:58 -- setup/driver.sh@47 -- # local fail=0 00:04:23.101 03:47:58 -- setup/driver.sh@49 -- # pick_driver 00:04:23.101 03:47:58 -- setup/driver.sh@36 -- # vfio 00:04:23.101 03:47:58 -- setup/driver.sh@21 -- # local iommu_grups 00:04:23.101 03:47:58 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:23.101 03:47:58 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:23.101 03:47:58 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:23.101 03:47:58 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:23.101 03:47:58 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:23.101 03:47:58 -- setup/driver.sh@32 -- # return 1 00:04:23.101 03:47:58 -- setup/driver.sh@38 -- # uio 00:04:23.101 03:47:58 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:23.101 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:23.101 03:47:58 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:23.101 03:47:58 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:23.101 Looking for driver=uio_pci_generic 00:04:23.101 03:47:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.101 03:47:58 -- setup/driver.sh@45 -- # setup output config 00:04:23.101 03:47:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.101 03:47:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.669 03:47:58 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:23.669 03:47:58 -- setup/driver.sh@58 -- # continue 00:04:23.669 03:47:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.928 03:47:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.928 03:47:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:23.928 03:47:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.928 03:47:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.928 03:47:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:23.928 03:47:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.928 03:47:58 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:23.928 03:47:58 -- setup/driver.sh@65 -- # setup reset 00:04:23.928 03:47:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.928 03:47:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.501 ************************************ 00:04:24.501 END TEST guess_driver 00:04:24.501 ************************************ 00:04:24.501 00:04:24.501 real 0m1.402s 00:04:24.501 user 0m0.563s 00:04:24.501 sys 0m0.850s 00:04:24.501 03:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.501 03:47:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.501 ************************************ 00:04:24.501 END TEST driver 00:04:24.501 ************************************ 00:04:24.501 00:04:24.501 real 0m2.166s 00:04:24.501 user 0m0.880s 00:04:24.501 sys 0m1.346s 00:04:24.501 03:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.501 03:47:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.759 03:47:59 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:24.759 03:47:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.759 03:47:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.759 ************************************ 00:04:24.759 START TEST devices 00:04:24.759 ************************************ 00:04:24.759 03:47:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:24.759 * Looking for test storage... 00:04:24.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:24.759 03:47:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:24.759 03:47:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:24.759 03:47:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:24.759 03:47:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:24.759 03:47:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:24.759 03:47:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:24.759 03:47:59 -- scripts/common.sh@335 -- # IFS=.-: 00:04:24.759 03:47:59 -- scripts/common.sh@335 -- # read -ra ver1 00:04:24.759 03:47:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.759 03:47:59 -- scripts/common.sh@336 -- # read -ra ver2 00:04:24.759 03:47:59 -- scripts/common.sh@337 -- # local 'op=<' 00:04:24.759 03:47:59 -- scripts/common.sh@339 -- # ver1_l=2 00:04:24.759 03:47:59 -- scripts/common.sh@340 -- # ver2_l=1 00:04:24.759 03:47:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:24.759 03:47:59 -- scripts/common.sh@343 -- # case "$op" in 00:04:24.759 03:47:59 -- scripts/common.sh@344 -- # : 1 00:04:24.759 03:47:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:24.759 03:47:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.759 03:47:59 -- scripts/common.sh@364 -- # decimal 1 00:04:24.759 03:47:59 -- scripts/common.sh@352 -- # local d=1 00:04:24.759 03:47:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.759 03:47:59 -- scripts/common.sh@354 -- # echo 1 00:04:24.759 03:47:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:24.759 03:47:59 -- scripts/common.sh@365 -- # decimal 2 00:04:24.759 03:47:59 -- scripts/common.sh@352 -- # local d=2 00:04:24.759 03:47:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.759 03:47:59 -- scripts/common.sh@354 -- # echo 2 00:04:24.759 03:47:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:24.759 03:47:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:24.759 03:47:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:24.759 03:47:59 -- scripts/common.sh@367 -- # return 0 00:04:24.759 03:47:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:24.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.759 --rc genhtml_branch_coverage=1 00:04:24.759 --rc genhtml_function_coverage=1 00:04:24.759 --rc genhtml_legend=1 00:04:24.759 --rc geninfo_all_blocks=1 00:04:24.759 --rc geninfo_unexecuted_blocks=1 00:04:24.759 00:04:24.759 ' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:24.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.759 --rc genhtml_branch_coverage=1 00:04:24.759 --rc genhtml_function_coverage=1 00:04:24.759 --rc genhtml_legend=1 00:04:24.759 --rc geninfo_all_blocks=1 00:04:24.759 --rc geninfo_unexecuted_blocks=1 00:04:24.759 00:04:24.759 ' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:24.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.759 --rc genhtml_branch_coverage=1 00:04:24.759 --rc genhtml_function_coverage=1 00:04:24.759 --rc genhtml_legend=1 00:04:24.759 --rc geninfo_all_blocks=1 00:04:24.759 --rc geninfo_unexecuted_blocks=1 00:04:24.759 00:04:24.759 ' 00:04:24.759 03:47:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:24.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.759 --rc genhtml_branch_coverage=1 00:04:24.759 --rc genhtml_function_coverage=1 00:04:24.759 --rc genhtml_legend=1 00:04:24.759 --rc geninfo_all_blocks=1 00:04:24.759 --rc geninfo_unexecuted_blocks=1 00:04:24.759 00:04:24.759 ' 00:04:24.759 03:47:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.759 03:47:59 -- setup/devices.sh@192 -- # setup reset 00:04:24.759 03:47:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.759 03:47:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.693 03:48:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.693 03:48:00 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:25.693 03:48:00 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:25.693 03:48:00 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:25.693 03:48:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.693 03:48:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:25.693 03:48:00 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:25.693 03:48:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.693 03:48:00 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:25.693 03:48:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.694 03:48:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:25.694 03:48:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:25.694 03:48:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:25.694 03:48:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.694 03:48:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.694 03:48:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:25.694 03:48:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:25.694 03:48:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:25.694 03:48:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.694 03:48:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.694 03:48:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:25.694 03:48:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:25.694 03:48:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:25.694 03:48:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.694 03:48:00 -- setup/devices.sh@196 -- # blocks=() 00:04:25.694 03:48:00 -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.694 03:48:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.694 03:48:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.694 03:48:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:25.694 03:48:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.694 03:48:00 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:25.694 03:48:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.694 03:48:00 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:25.694 03:48:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.694 No valid GPT data, bailing 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # pt= 00:04:25.694 03:48:00 -- scripts/common.sh@394 -- # return 1 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.694 03:48:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.694 03:48:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.694 03:48:00 -- setup/common.sh@80 -- # echo 5368709120 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:25.694 03:48:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.694 03:48:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:25.694 03:48:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:25.694 03:48:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:25.694 03:48:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
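Every namespace passes through the same two gates traced above for nvme0n1 and repeated next for nvme1n1: get_zoned_devs excludes zoned namespaces up front, then block_in_use plus a size floor selects usable test disks. The zoned gate is a plain sysfs read; a sketch of the shape visible in the trace (illustrative, not the verbatim autotest_common.sh helpers):

    is_block_zoned() {
        local device=$1
        # A device without the attribute cannot be zoned.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # "none" marks a regular (conventional) block device.
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            # Record zoned namespaces so later steps can skip them.
            is_block_zoned "${nvme##*/}" && zoned_devs["${nvme##*/}"]=1
        done
    }

All four namespaces read none on this run, so nothing is excluded and the in-use probe continues below.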
00:04:25.694 03:48:00 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:25.694 03:48:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:25.694 No valid GPT data, bailing 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # pt= 00:04:25.694 03:48:00 -- scripts/common.sh@394 -- # return 1 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:25.694 03:48:00 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:25.694 03:48:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:25.694 03:48:00 -- setup/common.sh@80 -- # echo 4294967296 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.694 03:48:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.694 03:48:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:25.694 03:48:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:25.694 03:48:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:25.694 03:48:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:25.694 03:48:00 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:25.694 03:48:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:25.694 No valid GPT data, bailing 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:25.694 03:48:00 -- scripts/common.sh@393 -- # pt= 00:04:25.694 03:48:00 -- scripts/common.sh@394 -- # return 1 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:25.694 03:48:00 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:25.694 03:48:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:25.694 03:48:00 -- setup/common.sh@80 -- # echo 4294967296 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.694 03:48:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.694 03:48:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:25.694 03:48:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:25.694 03:48:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:25.694 03:48:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:25.694 03:48:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:25.694 03:48:00 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:25.694 03:48:00 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:25.694 03:48:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:25.952 No valid GPT data, bailing 00:04:25.952 03:48:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:25.952 03:48:00 -- scripts/common.sh@393 -- # pt= 00:04:25.952 03:48:00 -- scripts/common.sh@394 -- # return 1 00:04:25.952 03:48:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:25.952 03:48:00 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:25.952 03:48:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:25.952 03:48:00 -- setup/common.sh@80 -- # echo 4294967296 
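Each 'No valid GPT data, bailing' above is the free-disk probe succeeding: a namespace is treated as safe to claim when spdk-gpt.py and blkid find no partition-table signature on it, and as large enough when its byte size clears min_disk_size (3221225472 bytes, i.e. 3 GiB). A condensed sketch of the two checks as they appear in the trace (illustrative, not the verbatim scripts/common.sh and setup/common.sh):

    block_in_use() {
        local block=$1 pt
        # blkid prints the partition-table type, if any; an empty result is
        # the "no signature found" case, so the disk is free to use.
        pt=$(blkid -s PTTYPE -o value "/dev/$block") || return 1
        [[ -n $pt ]]
    }

    sec_size_to_bytes() {
        local dev=$1
        # /sys/block/<dev>/size counts 512-byte sectors.
        [[ -e /sys/block/$dev ]] && echo $(($(< "/sys/block/$dev/size") * 512))
    }

That is how the 5 GiB nvme0n1 (5368709120 bytes) and the three 4 GiB nvme1n* namespaces (4294967296 bytes each) all end up in the blocks array checked next.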
00:04:25.952 03:48:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.952 03:48:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.952 03:48:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:25.952 03:48:00 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:25.952 03:48:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.952 03:48:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.952 03:48:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.952 03:48:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.953 03:48:00 -- common/autotest_common.sh@10 -- # set +x 00:04:25.953 ************************************ 00:04:25.953 START TEST nvme_mount 00:04:25.953 ************************************ 00:04:25.953 03:48:00 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:25.953 03:48:00 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.953 03:48:00 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.953 03:48:00 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.953 03:48:00 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.953 03:48:00 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.953 03:48:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.953 03:48:00 -- setup/common.sh@40 -- # local part_no=1 00:04:25.953 03:48:00 -- setup/common.sh@41 -- # local size=1073741824 00:04:25.953 03:48:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.953 03:48:00 -- setup/common.sh@44 -- # parts=() 00:04:25.953 03:48:00 -- setup/common.sh@44 -- # local parts 00:04:25.953 03:48:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.953 03:48:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.953 03:48:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.953 03:48:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.953 03:48:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.953 03:48:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:25.953 03:48:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.953 03:48:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.886 Creating new GPT entries in memory. 00:04:26.886 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.886 other utilities. 00:04:26.886 03:48:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.886 03:48:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.886 03:48:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.886 03:48:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.886 03:48:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:27.821 Creating new GPT entries in memory. 00:04:27.821 The operation has completed successfully. 
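The 'operation has completed successfully' message closes the first sgdisk call of partition_drive: zap any existing label, then carve equal-size partitions starting at sector 2048. The numbers in the trace line up, since size=1073741824 divided by 4096 gives 262144 sectors, and 2048 + 262144 - 1 = 264191, the end sector of partition 1. A sketch of that loop (illustrative, not the verbatim setup/common.sh, which also waits for the partition uevents via sync_dev_uevents.sh):

    partition_drive() {
        local disk=$1 part_no=${2:-1}
        local size=$((1073741824 / 4096))     # 262144 sectors per partition
        local part part_start=0 part_end=0
        sgdisk "/dev/$disk" --zap-all         # destroy any existing GPT/MBR label
        for ((part = 1; part <= part_no; part++)); do
            ((part_start = part_start == 0 ? 2048 : part_end + 1))
            ((part_end = part_start + size - 1))
            # flock keeps concurrent sgdisk/udev activity off the disk node.
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
    }

    partition_drive nvme0n1 1    # reproduces the --new=1:2048:264191 call above

The dm_mount test at the end of this section runs the same loop with two partitions, which is where the later --new=2:264192:526335 call comes from.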
00:04:27.821 03:48:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:27.821 03:48:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.821 03:48:02 -- setup/common.sh@62 -- # wait 53849 00:04:28.079 03:48:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.079 03:48:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:28.079 03:48:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.080 03:48:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.080 03:48:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.080 03:48:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.080 03:48:03 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.080 03:48:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:28.080 03:48:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.080 03:48:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.080 03:48:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.080 03:48:03 -- setup/devices.sh@53 -- # local found=0 00:04:28.080 03:48:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.080 03:48:03 -- setup/devices.sh@56 -- # : 00:04:28.080 03:48:03 -- setup/devices.sh@59 -- # local pci status 00:04:28.080 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.080 03:48:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:28.080 03:48:03 -- setup/devices.sh@47 -- # setup output config 00:04:28.080 03:48:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.080 03:48:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.080 03:48:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.080 03:48:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:28.080 03:48:03 -- setup/devices.sh@63 -- # found=1 00:04:28.080 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.080 03:48:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.080 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.646 03:48:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.646 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.646 03:48:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.646 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.646 03:48:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.646 03:48:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:28.646 03:48:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.646 03:48:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.646 03:48:03 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.646 03:48:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:28.646 03:48:03 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.646 03:48:03 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.646 03:48:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.646 03:48:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.646 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.646 03:48:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.646 03:48:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.905 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.905 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.905 03:48:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:28.905 03:48:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:28.905 03:48:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.905 03:48:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:28.905 03:48:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:28.905 03:48:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.905 03:48:03 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.905 03:48:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:28.905 03:48:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:28.905 03:48:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.905 03:48:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.905 03:48:03 -- setup/devices.sh@53 -- # local found=0 00:04:28.905 03:48:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.905 03:48:03 -- setup/devices.sh@56 -- # : 00:04:28.905 03:48:03 -- setup/devices.sh@59 -- # local pci status 00:04:28.905 03:48:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.905 03:48:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:28.905 03:48:03 -- setup/devices.sh@47 -- # setup output config 00:04:28.905 03:48:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.905 03:48:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.164 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.164 03:48:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:29.164 03:48:04 -- setup/devices.sh@63 -- # found=1 00:04:29.164 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.164 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.164 
03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.422 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.422 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.422 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.422 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.681 03:48:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.681 03:48:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:29.681 03:48:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.681 03:48:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.681 03:48:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.681 03:48:04 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.681 03:48:04 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:29.681 03:48:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:29.681 03:48:04 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:29.681 03:48:04 -- setup/devices.sh@50 -- # local mount_point= 00:04:29.681 03:48:04 -- setup/devices.sh@51 -- # local test_file= 00:04:29.681 03:48:04 -- setup/devices.sh@53 -- # local found=0 00:04:29.681 03:48:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.681 03:48:04 -- setup/devices.sh@59 -- # local pci status 00:04:29.681 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.681 03:48:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:29.681 03:48:04 -- setup/devices.sh@47 -- # setup output config 00:04:29.681 03:48:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.681 03:48:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.940 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.940 03:48:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.940 03:48:04 -- setup/devices.sh@63 -- # found=1 00:04:29.940 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.940 03:48:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:29.940 03:48:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.199 03:48:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:30.199 03:48:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.199 03:48:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:30.199 03:48:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.199 03:48:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.199 03:48:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.199 03:48:05 -- setup/devices.sh@68 -- # return 0 00:04:30.199 03:48:05 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:30.199 03:48:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.199 03:48:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.199 03:48:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.199 03:48:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.199 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:30.457 00:04:30.457 real 0m4.445s 00:04:30.457 user 0m1.025s 00:04:30.457 sys 0m1.099s 00:04:30.457 03:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.457 03:48:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.457 ************************************ 00:04:30.457 END TEST nvme_mount 00:04:30.457 ************************************ 00:04:30.457 03:48:05 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:30.457 03:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.457 03:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.457 03:48:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.457 ************************************ 00:04:30.457 START TEST dm_mount 00:04:30.457 ************************************ 00:04:30.457 03:48:05 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:30.457 03:48:05 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:30.457 03:48:05 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:30.457 03:48:05 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:30.457 03:48:05 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:30.457 03:48:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.457 03:48:05 -- setup/common.sh@40 -- # local part_no=2 00:04:30.457 03:48:05 -- setup/common.sh@41 -- # local size=1073741824 00:04:30.457 03:48:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.457 03:48:05 -- setup/common.sh@44 -- # parts=() 00:04:30.457 03:48:05 -- setup/common.sh@44 -- # local parts 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.457 03:48:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.457 03:48:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:30.457 03:48:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.457 03:48:05 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:30.457 03:48:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.457 03:48:05 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.392 Creating new GPT entries in memory. 00:04:31.392 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:31.392 other utilities. 00:04:31.392 03:48:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:31.392 03:48:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.392 03:48:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.392 03:48:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.392 03:48:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:32.325 Creating new GPT entries in memory. 00:04:32.326 The operation has completed successfully. 00:04:32.326 03:48:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:32.326 03:48:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.326 03:48:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:32.326 03:48:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.326 03:48:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:33.727 The operation has completed successfully. 00:04:33.727 03:48:08 -- setup/common.sh@57 -- # (( part++ )) 00:04:33.727 03:48:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.727 03:48:08 -- setup/common.sh@62 -- # wait 54304 00:04:33.727 03:48:08 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:33.727 03:48:08 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.727 03:48:08 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:33.727 03:48:08 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:33.727 03:48:08 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:33.727 03:48:08 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.727 03:48:08 -- setup/devices.sh@161 -- # break 00:04:33.727 03:48:08 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.727 03:48:08 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:33.727 03:48:08 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:33.727 03:48:08 -- setup/devices.sh@166 -- # dm=dm-0 00:04:33.727 03:48:08 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:33.727 03:48:08 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:33.727 03:48:08 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.727 03:48:08 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:33.727 03:48:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.727 03:48:08 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.727 03:48:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:33.727 03:48:08 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.727 03:48:08 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:33.727 03:48:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:33.727 03:48:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:33.727 03:48:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.727 03:48:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:33.727 03:48:08 -- setup/devices.sh@53 -- # local found=0 00:04:33.727 03:48:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.727 03:48:08 -- setup/devices.sh@56 -- # : 00:04:33.727 03:48:08 -- setup/devices.sh@59 -- # local pci status 00:04:33.727 03:48:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.727 03:48:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:33.727 03:48:08 -- setup/devices.sh@47 -- # setup output config 00:04:33.727 03:48:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.727 03:48:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.727 03:48:08 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.727 03:48:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:33.727 03:48:08 -- setup/devices.sh@63 -- # found=1 00:04:33.727 03:48:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.727 03:48:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.727 03:48:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.986 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.986 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.244 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.244 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.244 03:48:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.244 03:48:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:34.244 03:48:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.244 03:48:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.244 03:48:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.244 03:48:09 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.244 03:48:09 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:34.244 03:48:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:34.244 03:48:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:34.244 03:48:09 -- setup/devices.sh@50 -- # local mount_point= 00:04:34.244 03:48:09 -- setup/devices.sh@51 -- # local test_file= 00:04:34.244 03:48:09 -- setup/devices.sh@53 -- # local found=0 00:04:34.244 03:48:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.244 03:48:09 -- setup/devices.sh@59 -- # local pci status 00:04:34.244 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.244 03:48:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:34.244 03:48:09 -- setup/devices.sh@47 -- # setup output config 00:04:34.244 03:48:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.244 03:48:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.503 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.503 03:48:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:34.503 03:48:09 -- setup/devices.sh@63 -- # found=1 00:04:34.503 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.503 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.503 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.762 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.762 03:48:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.762 03:48:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.762 03:48:09 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.020 03:48:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.020 03:48:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.020 03:48:09 -- setup/devices.sh@68 -- # return 0 00:04:35.020 03:48:09 -- setup/devices.sh@187 -- # cleanup_dm 00:04:35.020 03:48:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.020 03:48:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.020 03:48:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:35.020 03:48:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.020 03:48:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:35.020 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.020 03:48:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.020 03:48:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:35.020 00:04:35.020 real 0m4.581s 00:04:35.020 user 0m0.701s 00:04:35.020 sys 0m0.809s 00:04:35.020 ************************************ 00:04:35.020 END TEST dm_mount 00:04:35.020 ************************************ 00:04:35.020 03:48:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.020 03:48:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.020 03:48:09 -- setup/devices.sh@1 -- # cleanup 00:04:35.020 03:48:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:35.020 03:48:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.020 03:48:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.020 03:48:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:35.020 03:48:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.020 03:48:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.279 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:35.279 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:35.279 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.279 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.279 03:48:10 -- setup/devices.sh@12 -- # cleanup_dm 00:04:35.279 03:48:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.279 03:48:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.279 03:48:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.279 03:48:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.279 03:48:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.279 03:48:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:35.279 00:04:35.279 real 0m10.643s 00:04:35.279 user 0m2.475s 00:04:35.279 sys 0m2.494s 00:04:35.279 03:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.279 03:48:10 -- common/autotest_common.sh@10 -- # set +x 00:04:35.279 ************************************ 00:04:35.279 END TEST devices 00:04:35.279 ************************************ 00:04:35.279 00:04:35.279 real 0m22.505s 00:04:35.279 user 0m7.813s 00:04:35.279 sys 0m9.083s 00:04:35.279 03:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.279 03:48:10 -- common/autotest_common.sh@10 -- # set +x 00:04:35.279 ************************************ 00:04:35.279 END TEST setup.sh 00:04:35.279 ************************************ 00:04:35.279 03:48:10 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:35.537 Hugepages 00:04:35.537 node hugesize free / total 00:04:35.537 node0 1048576kB 0 / 0 00:04:35.537 node0 2048kB 2048 / 2048 00:04:35.537 00:04:35.537 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.537 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.537 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:35.795 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:35.795 03:48:10 -- spdk/autotest.sh@128 -- # uname -s 00:04:35.795 03:48:10 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:35.795 03:48:10 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:35.795 03:48:10 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.362 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.620 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.620 03:48:11 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:37.555 03:48:12 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:37.555 03:48:12 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:37.555 03:48:12 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:37.555 03:48:12 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:37.555 03:48:12 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:37.555 03:48:12 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:37.555 03:48:12 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.555 03:48:12 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:37.555 03:48:12 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:37.555 03:48:12 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:37.555 03:48:12 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:37.555 03:48:12 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.073 Waiting for block devices as requested 00:04:38.073 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:38.073 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:38.073 03:48:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:38.073 03:48:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:38.073 03:48:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:38.073 03:48:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:38.073 03:48:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:38.073 03:48:13 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:38.073 03:48:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:38.073 03:48:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:38.331 03:48:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:38.331 03:48:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:38.331 03:48:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:38.331 03:48:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:38.331 03:48:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:38.331 03:48:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:38.331 03:48:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:38.331 03:48:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:38.331 03:48:13 -- common/autotest_common.sh@1552 -- # continue 00:04:38.331 03:48:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:38.331 03:48:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:38.331 03:48:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:38.332 03:48:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:38.332 03:48:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:38.332 03:48:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:38.332 03:48:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:38.332 03:48:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:38.332 03:48:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:38.332 03:48:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:38.332 03:48:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:38.332 03:48:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:38.332 03:48:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:38.332 03:48:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:38.332 03:48:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:38.332 03:48:13 -- common/autotest_common.sh@1552 -- # continue 00:04:38.332 03:48:13 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:38.332 03:48:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.332 03:48:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.332 03:48:13 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:38.332 03:48:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.332 03:48:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.332 03:48:13 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.157 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.157 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:39.157 03:48:14 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:39.157 03:48:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.157 03:48:14 -- common/autotest_common.sh@10 -- # set +x 00:04:39.157 03:48:14 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:39.157 03:48:14 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:39.157 03:48:14 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.157 03:48:14 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:39.157 03:48:14 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:39.157 03:48:14 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:39.157 03:48:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:39.157 03:48:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:39.157 03:48:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.157 03:48:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.157 03:48:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:39.157 03:48:14 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:39.157 03:48:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:39.157 03:48:14 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.157 03:48:14 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:39.157 03:48:14 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.157 03:48:14 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.157 03:48:14 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.157 03:48:14 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:39.157 03:48:14 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.157 03:48:14 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.157 03:48:14 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:39.157 03:48:14 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:39.157 03:48:14 -- common/autotest_common.sh@1588 -- # return 0 00:04:39.157 03:48:14 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:39.157 03:48:14 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:39.157 03:48:14 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:39.157 03:48:14 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:39.157 03:48:14 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:39.157 03:48:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.157 03:48:14 -- common/autotest_common.sh@10 -- # set +x 00:04:39.157 03:48:14 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.157 03:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.157 03:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.157 03:48:14 -- common/autotest_common.sh@10 -- # set +x 00:04:39.157 ************************************ 00:04:39.157 START TEST env 00:04:39.157 ************************************ 00:04:39.157 03:48:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.416 * Looking for test storage... 
00:04:39.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:39.416 03:48:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.416 03:48:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.416 03:48:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.416 03:48:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.416 03:48:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.416 03:48:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.416 03:48:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.416 03:48:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.416 03:48:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.416 03:48:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.416 03:48:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.416 03:48:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.416 03:48:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.416 03:48:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.416 03:48:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.416 03:48:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.416 03:48:14 -- scripts/common.sh@344 -- # : 1 00:04:39.416 03:48:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.416 03:48:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.416 03:48:14 -- scripts/common.sh@364 -- # decimal 1 00:04:39.416 03:48:14 -- scripts/common.sh@352 -- # local d=1 00:04:39.416 03:48:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.416 03:48:14 -- scripts/common.sh@354 -- # echo 1 00:04:39.416 03:48:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.416 03:48:14 -- scripts/common.sh@365 -- # decimal 2 00:04:39.416 03:48:14 -- scripts/common.sh@352 -- # local d=2 00:04:39.416 03:48:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.416 03:48:14 -- scripts/common.sh@354 -- # echo 2 00:04:39.416 03:48:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.416 03:48:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.416 03:48:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.416 03:48:14 -- scripts/common.sh@367 -- # return 0 00:04:39.416 03:48:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.416 03:48:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.416 --rc genhtml_branch_coverage=1 00:04:39.416 --rc genhtml_function_coverage=1 00:04:39.416 --rc genhtml_legend=1 00:04:39.416 --rc geninfo_all_blocks=1 00:04:39.416 --rc geninfo_unexecuted_blocks=1 00:04:39.416 00:04:39.416 ' 00:04:39.416 03:48:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.416 --rc genhtml_branch_coverage=1 00:04:39.416 --rc genhtml_function_coverage=1 00:04:39.416 --rc genhtml_legend=1 00:04:39.416 --rc geninfo_all_blocks=1 00:04:39.416 --rc geninfo_unexecuted_blocks=1 00:04:39.416 00:04:39.416 ' 00:04:39.416 03:48:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.416 --rc genhtml_branch_coverage=1 00:04:39.416 --rc genhtml_function_coverage=1 00:04:39.416 --rc genhtml_legend=1 00:04:39.416 --rc geninfo_all_blocks=1 00:04:39.416 --rc geninfo_unexecuted_blocks=1 00:04:39.416 00:04:39.416 ' 00:04:39.416 03:48:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.416 --rc genhtml_branch_coverage=1 00:04:39.416 --rc genhtml_function_coverage=1 00:04:39.416 --rc genhtml_legend=1 00:04:39.416 --rc geninfo_all_blocks=1 00:04:39.416 --rc geninfo_unexecuted_blocks=1 00:04:39.416 00:04:39.416 ' 00:04:39.416 03:48:14 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.416 03:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.416 03:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.416 03:48:14 -- common/autotest_common.sh@10 -- # set +x 00:04:39.416 ************************************ 00:04:39.417 START TEST env_memory 00:04:39.417 ************************************ 00:04:39.417 03:48:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.417 00:04:39.417 00:04:39.417 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.417 http://cunit.sourceforge.net/ 00:04:39.417 00:04:39.417 00:04:39.417 Suite: memory 00:04:39.417 Test: alloc and free memory map ...[2024-11-08 03:48:14.511791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:39.675 passed 00:04:39.675 Test: mem map translation ...[2024-11-08 03:48:14.543261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:39.675 [2024-11-08 03:48:14.543344] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:39.675 [2024-11-08 03:48:14.543426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:39.676 [2024-11-08 03:48:14.543440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:39.676 passed 00:04:39.676 Test: mem map registration ...[2024-11-08 03:48:14.607279] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:39.676 [2024-11-08 03:48:14.607358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:39.676 passed 00:04:39.676 Test: mem map adjacent registrations ...passed 00:04:39.676 00:04:39.676 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.676 suites 1 1 n/a 0 0 00:04:39.676 tests 4 4 4 0 0 00:04:39.676 asserts 152 152 152 0 n/a 00:04:39.676 00:04:39.676 Elapsed time = 0.215 seconds 00:04:39.676 00:04:39.676 real 0m0.238s 00:04:39.676 user 0m0.218s 00:04:39.676 sys 0m0.015s 00:04:39.676 03:48:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.676 ************************************ 00:04:39.676 END TEST env_memory 00:04:39.676 03:48:14 -- common/autotest_common.sh@10 -- # set +x 00:04:39.676 ************************************ 00:04:39.676 03:48:14 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.676 03:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.676 03:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.676 03:48:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.676 ************************************ 00:04:39.676 START TEST env_vtophys 00:04:39.676 ************************************ 00:04:39.676 03:48:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.676 EAL: lib.eal log level changed from notice to debug 00:04:39.676 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 1 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 2 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 3 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 4 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 5 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 6 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 7 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 8 as core 0 on socket 0 00:04:39.676 EAL: Detected lcore 9 as core 0 on socket 0 00:04:39.676 EAL: Maximum logical cores by configuration: 128 00:04:39.676 EAL: Detected CPU lcores: 10 00:04:39.676 EAL: Detected NUMA nodes: 1 00:04:39.676 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:39.676 EAL: Detected shared linkage of DPDK 00:04:39.676 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.676 EAL: Selected IOVA mode 'PA' 00:04:39.935 EAL: Probing VFIO support... 00:04:39.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.935 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:39.935 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.935 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.935 EAL: Setting up physically contiguous memory... 00:04:39.935 EAL: Setting maximum number of open files to 524288 00:04:39.935 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.935 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.935 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.935 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.935 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.935 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.935 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.935 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.935 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.935 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.935 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.935 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.935 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.935 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
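The EAL banner above captures the whole IOVA-mode decision for this run: /sys/module/vfio is absent inside the VM, so VFIO support is skipped and EAL falls back to IOVA mode 'PA', which in turn requires the hugepage-backed, physically addressable memory whose four segment lists are reserved above. A small illustrative shell check (not part of the test suite) that mirrors what EAL probes here, using the standard sysfs/procfs paths:

    #!/usr/bin/env bash
    # Mirror the EAL probe order seen in the log: VFIO first, hugepages second.
    if [[ -d /sys/module/vfio ]]; then
        echo "vfio loaded: IOVA mode 'VA' is available"
    else
        echo "vfio missing: EAL falls back to IOVA mode 'PA'"
    fi
    # PA mode relies on the 2MB hugepages reserved earlier in this job
    # (the Hugepages status block reported 2048 x 2048kB on node0).
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
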
00:04:39.935 EAL: Hugepages will be freed exactly as allocated. 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: TSC frequency is ~2200000 KHz 00:04:39.935 EAL: Main lcore 0 is ready (tid=7f6f41dffa00;cpuset=[0]) 00:04:39.935 EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 0 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.935 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:39.935 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.935 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:39.935 00:04:39.935 00:04:39.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.935 http://cunit.sourceforge.net/ 00:04:39.935 00:04:39.935 00:04:39.935 Suite: components_suite 00:04:39.935 Test: vtophys_malloc_test ...passed 00:04:39.935 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 4MB 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was shrunk by 4MB 00:04:39.935 EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.935 EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.935 EAL: Trying to obtain current memory policy. 
00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.935 EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.935 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.935 EAL: Trying to obtain current memory policy. 00:04:39.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.935 EAL: Restoring previous memory policy: 4 00:04:39.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.935 EAL: request: mp_malloc_sync 00:04:39.935 EAL: No shared files mode enabled, IPC is disabled 00:04:39.936 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.936 EAL: request: mp_malloc_sync 00:04:39.936 EAL: No shared files mode enabled, IPC is disabled 00:04:39.936 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.936 EAL: Trying to obtain current memory policy. 00:04:39.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.936 EAL: Restoring previous memory policy: 4 00:04:39.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.936 EAL: request: mp_malloc_sync 00:04:39.936 EAL: No shared files mode enabled, IPC is disabled 00:04:39.936 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.936 EAL: request: mp_malloc_sync 00:04:39.936 EAL: No shared files mode enabled, IPC is disabled 00:04:39.936 EAL: Heap on socket 0 was shrunk by 130MB 00:04:39.936 EAL: Trying to obtain current memory policy. 00:04:39.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.194 EAL: Restoring previous memory policy: 4 00:04:40.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.194 EAL: request: mp_malloc_sync 00:04:40.194 EAL: No shared files mode enabled, IPC is disabled 00:04:40.194 EAL: Heap on socket 0 was expanded by 258MB 00:04:40.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.194 EAL: request: mp_malloc_sync 00:04:40.194 EAL: No shared files mode enabled, IPC is disabled 00:04:40.194 EAL: Heap on socket 0 was shrunk by 258MB 00:04:40.194 EAL: Trying to obtain current memory policy. 
00:04:40.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.453 EAL: Restoring previous memory policy: 4 00:04:40.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.453 EAL: request: mp_malloc_sync 00:04:40.453 EAL: No shared files mode enabled, IPC is disabled 00:04:40.453 EAL: Heap on socket 0 was expanded by 514MB 00:04:40.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.453 EAL: request: mp_malloc_sync 00:04:40.453 EAL: No shared files mode enabled, IPC is disabled 00:04:40.453 EAL: Heap on socket 0 was shrunk by 514MB 00:04:40.453 EAL: Trying to obtain current memory policy. 00:04:40.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.717 EAL: Restoring previous memory policy: 4 00:04:40.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.717 EAL: request: mp_malloc_sync 00:04:40.717 EAL: No shared files mode enabled, IPC is disabled 00:04:40.717 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.235 passed 00:04:41.235 00:04:41.235 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.235 suites 1 1 n/a 0 0 00:04:41.235 tests 2 2 2 0 0 00:04:41.235 asserts 5253 5253 5253 0 n/a 00:04:41.235 00:04:41.235 Elapsed time = 1.215 seconds 00:04:41.235 EAL: request: mp_malloc_sync 00:04:41.235 EAL: No shared files mode enabled, IPC is disabled 00:04:41.235 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.235 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.235 EAL: request: mp_malloc_sync 00:04:41.235 EAL: No shared files mode enabled, IPC is disabled 00:04:41.235 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.235 EAL: No shared files mode enabled, IPC is disabled 00:04:41.235 EAL: No shared files mode enabled, IPC is disabled 00:04:41.235 EAL: No shared files mode enabled, IPC is disabled 00:04:41.235 00:04:41.235 real 0m1.411s 00:04:41.235 user 0m0.772s 00:04:41.235 sys 0m0.506s 00:04:41.235 03:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.235 ************************************ 00:04:41.235 END TEST env_vtophys 00:04:41.236 ************************************ 00:04:41.236 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 03:48:16 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.236 03:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.236 03:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.236 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 ************************************ 00:04:41.236 START TEST env_pci 00:04:41.236 ************************************ 00:04:41.236 03:48:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.236 00:04:41.236 00:04:41.236 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.236 http://cunit.sourceforge.net/ 00:04:41.236 00:04:41.236 00:04:41.236 Suite: pci 00:04:41.236 Test: pci_hook ...[2024-11-08 03:48:16.229853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55443 has claimed it 00:04:41.236 passed 00:04:41.236 00:04:41.236 EAL: Cannot find device (10000:00:01.0) 00:04:41.236 EAL: Failed to attach device on primary process 00:04:41.236 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.236 suites 1 1 n/a 0 0 00:04:41.236 tests 1 1 1 0 0 00:04:41.236 asserts 25 25 25 0 n/a 00:04:41.236 00:04:41.236 Elapsed 
time = 0.002 seconds 00:04:41.236 00:04:41.236 real 0m0.023s 00:04:41.236 user 0m0.012s 00:04:41.236 sys 0m0.010s 00:04:41.236 03:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.236 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 ************************************ 00:04:41.236 END TEST env_pci 00:04:41.236 ************************************ 00:04:41.236 03:48:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.236 03:48:16 -- env/env.sh@15 -- # uname 00:04:41.236 03:48:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.236 03:48:16 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.236 03:48:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.236 03:48:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:41.236 03:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.236 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 ************************************ 00:04:41.236 START TEST env_dpdk_post_init 00:04:41.236 ************************************ 00:04:41.236 03:48:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.236 EAL: Detected CPU lcores: 10 00:04:41.236 EAL: Detected NUMA nodes: 1 00:04:41.236 EAL: Detected shared linkage of DPDK 00:04:41.236 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.236 EAL: Selected IOVA mode 'PA' 00:04:41.495 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.495 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:41.495 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:41.495 Starting DPDK initialization... 00:04:41.495 Starting SPDK post initialization... 00:04:41.495 SPDK NVMe probe 00:04:41.495 Attaching to 0000:00:06.0 00:04:41.495 Attaching to 0000:00:07.0 00:04:41.495 Attached to 0000:00:06.0 00:04:41.495 Attached to 0000:00:07.0 00:04:41.495 Cleaning up... 
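The post-init test that just attached both controllers is a standalone binary driven by two EAL arguments, both visible in the trace above. A sketch of invoking it by hand against this job's build tree (the workspace path is specific to this run, and the devices must already be bound to uio_pci_generic by setup.sh, as shown earlier in the log; root is assumed for hugepage and device access):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -c 0x1 restricts EAL to lcore 0; --base-virtaddr pins the virtual
    # address base so hugepage mappings land at a predictable location.
    sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000
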
00:04:41.495 00:04:41.495 real 0m0.182s 00:04:41.495 user 0m0.041s 00:04:41.495 sys 0m0.043s 00:04:41.495 03:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.495 ************************************ 00:04:41.495 END TEST env_dpdk_post_init 00:04:41.495 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.495 ************************************ 00:04:41.495 03:48:16 -- env/env.sh@26 -- # uname 00:04:41.495 03:48:16 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.495 03:48:16 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.495 03:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.495 03:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.495 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.495 ************************************ 00:04:41.495 START TEST env_mem_callbacks 00:04:41.495 ************************************ 00:04:41.495 03:48:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.495 EAL: Detected CPU lcores: 10 00:04:41.495 EAL: Detected NUMA nodes: 1 00:04:41.495 EAL: Detected shared linkage of DPDK 00:04:41.495 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.495 EAL: Selected IOVA mode 'PA' 00:04:41.753 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.753 00:04:41.753 00:04:41.753 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.753 http://cunit.sourceforge.net/ 00:04:41.753 00:04:41.753 00:04:41.753 Suite: memory 00:04:41.753 Test: test ... 00:04:41.753 register 0x200000200000 2097152 00:04:41.753 malloc 3145728 00:04:41.753 register 0x200000400000 4194304 00:04:41.753 buf 0x200000500000 len 3145728 PASSED 00:04:41.753 malloc 64 00:04:41.753 buf 0x2000004fff40 len 64 PASSED 00:04:41.753 malloc 4194304 00:04:41.753 register 0x200000800000 6291456 00:04:41.753 buf 0x200000a00000 len 4194304 PASSED 00:04:41.753 free 0x200000500000 3145728 00:04:41.753 free 0x2000004fff40 64 00:04:41.753 unregister 0x200000400000 4194304 PASSED 00:04:41.753 free 0x200000a00000 4194304 00:04:41.753 unregister 0x200000800000 6291456 PASSED 00:04:41.753 malloc 8388608 00:04:41.753 register 0x200000400000 10485760 00:04:41.753 buf 0x200000600000 len 8388608 PASSED 00:04:41.753 free 0x200000600000 8388608 00:04:41.753 unregister 0x200000400000 10485760 PASSED 00:04:41.753 passed 00:04:41.753 00:04:41.753 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.753 suites 1 1 n/a 0 0 00:04:41.754 tests 1 1 1 0 0 00:04:41.754 asserts 15 15 15 0 n/a 00:04:41.754 00:04:41.754 Elapsed time = 0.009 seconds 00:04:41.754 00:04:41.754 real 0m0.146s 00:04:41.754 user 0m0.021s 00:04:41.754 sys 0m0.022s 00:04:41.754 03:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.754 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.754 ************************************ 00:04:41.754 END TEST env_mem_callbacks 00:04:41.754 ************************************ 00:04:41.754 00:04:41.754 real 0m2.465s 00:04:41.754 user 0m1.247s 00:04:41.754 sys 0m0.865s 00:04:41.754 ************************************ 00:04:41.754 END TEST env 00:04:41.754 ************************************ 00:04:41.754 03:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.754 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.754 03:48:16 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
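The rpc suite launched on the line above flows through the same run_test wrapper as every suite before it: an argument-count guard (the recurring '[' 2 -le 1 ']' trace), START/END banners, and a timed body that produces the real/user/sys lines. A simplified sketch of that wrapper, not the exact autotest_common.sh implementation:

    run_test() {
        [[ $# -le 1 ]] && return 1  # the "'[' 2 -le 1 ']'" guard in the traces
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                   # yields the real/user/sys summary lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
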
00:04:41.754 03:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.754 03:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.754 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.754 ************************************ 00:04:41.754 START TEST rpc 00:04:41.754 ************************************ 00:04:41.754 03:48:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.754 * Looking for test storage... 00:04:41.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.754 03:48:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.754 03:48:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.754 03:48:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.013 03:48:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.013 03:48:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.013 03:48:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.013 03:48:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.013 03:48:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.013 03:48:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.013 03:48:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.013 03:48:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.013 03:48:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.013 03:48:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.013 03:48:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.013 03:48:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.013 03:48:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.013 03:48:16 -- scripts/common.sh@344 -- # : 1 00:04:42.013 03:48:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.013 03:48:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.013 03:48:16 -- scripts/common.sh@364 -- # decimal 1 00:04:42.013 03:48:16 -- scripts/common.sh@352 -- # local d=1 00:04:42.013 03:48:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.013 03:48:16 -- scripts/common.sh@354 -- # echo 1 00:04:42.013 03:48:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.013 03:48:16 -- scripts/common.sh@365 -- # decimal 2 00:04:42.013 03:48:16 -- scripts/common.sh@352 -- # local d=2 00:04:42.013 03:48:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.013 03:48:16 -- scripts/common.sh@354 -- # echo 2 00:04:42.013 03:48:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.013 03:48:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.013 03:48:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.013 03:48:16 -- scripts/common.sh@367 -- # return 0 00:04:42.013 03:48:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.013 03:48:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.013 --rc genhtml_branch_coverage=1 00:04:42.013 --rc genhtml_function_coverage=1 00:04:42.013 --rc genhtml_legend=1 00:04:42.013 --rc geninfo_all_blocks=1 00:04:42.013 --rc geninfo_unexecuted_blocks=1 00:04:42.013 00:04:42.013 ' 00:04:42.013 03:48:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.013 --rc genhtml_branch_coverage=1 00:04:42.013 --rc genhtml_function_coverage=1 00:04:42.013 --rc genhtml_legend=1 00:04:42.013 --rc geninfo_all_blocks=1 00:04:42.013 --rc geninfo_unexecuted_blocks=1 00:04:42.013 00:04:42.013 ' 00:04:42.013 03:48:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.013 --rc genhtml_branch_coverage=1 00:04:42.013 --rc genhtml_function_coverage=1 00:04:42.013 --rc genhtml_legend=1 00:04:42.013 --rc geninfo_all_blocks=1 00:04:42.013 --rc geninfo_unexecuted_blocks=1 00:04:42.013 00:04:42.013 ' 00:04:42.013 03:48:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.013 --rc genhtml_branch_coverage=1 00:04:42.013 --rc genhtml_function_coverage=1 00:04:42.013 --rc genhtml_legend=1 00:04:42.013 --rc geninfo_all_blocks=1 00:04:42.013 --rc geninfo_unexecuted_blocks=1 00:04:42.013 00:04:42.013 ' 00:04:42.013 03:48:16 -- rpc/rpc.sh@65 -- # spdk_pid=55565 00:04:42.013 03:48:16 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.013 03:48:16 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.013 03:48:16 -- rpc/rpc.sh@67 -- # waitforlisten 55565 00:04:42.013 03:48:16 -- common/autotest_common.sh@829 -- # '[' -z 55565 ']' 00:04:42.013 03:48:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.013 03:48:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.013 03:48:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
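The waitforlisten step above blocks until the freshly forked spdk_tgt (pid 55565) answers on its UNIX-domain RPC socket. Reduced to its core, the wait is a poll loop over SPDK's stock rpc.py client; the retry bound mirrors the max_retries=100 in the trace, though the real helper also verifies the pid is still alive between attempts:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        if "$SPDK_DIR/scripts/rpc.py" -s "$sock" spdk_get_version &> /dev/null; then
            echo "spdk_tgt is up on $sock"
            break
        fi
        sleep 0.5
    done
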
00:04:42.013 03:48:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.013 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:42.013 [2024-11-08 03:48:17.034228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.013 [2024-11-08 03:48:17.034777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55565 ] 00:04:42.271 [2024-11-08 03:48:17.177059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.271 [2024-11-08 03:48:17.278380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.271 [2024-11-08 03:48:17.278809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.271 [2024-11-08 03:48:17.278875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55565' to capture a snapshot of events at runtime. 00:04:42.272 [2024-11-08 03:48:17.279110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55565 for offline analysis/debug. 00:04:42.272 [2024-11-08 03:48:17.279206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.208 03:48:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.208 03:48:18 -- common/autotest_common.sh@862 -- # return 0 00:04:43.208 03:48:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.208 03:48:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.208 03:48:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.208 03:48:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.208 03:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.208 03:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.208 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.208 ************************************ 00:04:43.208 START TEST rpc_integrity 00:04:43.208 ************************************ 00:04:43.208 03:48:18 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:43.208 03:48:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.208 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.208 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.208 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.208 03:48:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.208 03:48:18 -- rpc/rpc.sh@13 -- # jq length 00:04:43.208 03:48:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.208 03:48:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.208 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.208 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.208 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.208 03:48:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.208 03:48:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.208 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.208 03:48:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.208 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.208 03:48:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.208 { 00:04:43.208 "aliases": [ 00:04:43.208 "fe41c2d1-e1af-4bc0-a7dd-5b3627ba1d2e" 00:04:43.208 ], 00:04:43.208 "assigned_rate_limits": { 00:04:43.208 "r_mbytes_per_sec": 0, 00:04:43.208 "rw_ios_per_sec": 0, 00:04:43.208 "rw_mbytes_per_sec": 0, 00:04:43.208 "w_mbytes_per_sec": 0 00:04:43.208 }, 00:04:43.208 "block_size": 512, 00:04:43.208 "claimed": false, 00:04:43.208 "driver_specific": {}, 00:04:43.208 "memory_domains": [ 00:04:43.208 { 00:04:43.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.208 "dma_device_type": 2 00:04:43.208 } 00:04:43.208 ], 00:04:43.208 "name": "Malloc0", 00:04:43.208 "num_blocks": 16384, 00:04:43.208 "product_name": "Malloc disk", 00:04:43.208 "supported_io_types": { 00:04:43.208 "abort": true, 00:04:43.208 "compare": false, 00:04:43.208 "compare_and_write": false, 00:04:43.208 "flush": true, 00:04:43.208 "nvme_admin": false, 00:04:43.208 "nvme_io": false, 00:04:43.208 "read": true, 00:04:43.208 "reset": true, 00:04:43.208 "unmap": true, 00:04:43.208 "write": true, 00:04:43.208 "write_zeroes": true 00:04:43.208 }, 00:04:43.208 "uuid": "fe41c2d1-e1af-4bc0-a7dd-5b3627ba1d2e", 00:04:43.208 "zoned": false 00:04:43.208 } 00:04:43.208 ]' 00:04:43.208 03:48:18 -- rpc/rpc.sh@17 -- # jq length 00:04:43.208 03:48:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.208 03:48:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.208 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.208 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.208 [2024-11-08 03:48:18.207992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.208 [2024-11-08 03:48:18.208031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.208 [2024-11-08 03:48:18.208046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eab880 00:04:43.208 [2024-11-08 03:48:18.208054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.208 [2024-11-08 03:48:18.209483] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.208 [2024-11-08 03:48:18.209514] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.208 Passthru0 00:04:43.208 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.208 03:48:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.208 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.208 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.208 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.208 03:48:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.208 { 00:04:43.208 "aliases": [ 00:04:43.209 "fe41c2d1-e1af-4bc0-a7dd-5b3627ba1d2e" 00:04:43.209 ], 00:04:43.209 "assigned_rate_limits": { 00:04:43.209 "r_mbytes_per_sec": 0, 00:04:43.209 "rw_ios_per_sec": 0, 00:04:43.209 "rw_mbytes_per_sec": 0, 00:04:43.209 "w_mbytes_per_sec": 0 00:04:43.209 }, 00:04:43.209 "block_size": 512, 00:04:43.209 "claim_type": "exclusive_write", 00:04:43.209 "claimed": true, 00:04:43.209 "driver_specific": {}, 00:04:43.209 "memory_domains": [ 00:04:43.209 { 00:04:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.209 "dma_device_type": 2 00:04:43.209 } 00:04:43.209 ], 00:04:43.209 "name": "Malloc0", 00:04:43.209 "num_blocks": 16384, 
00:04:43.209 "product_name": "Malloc disk", 00:04:43.209 "supported_io_types": { 00:04:43.209 "abort": true, 00:04:43.209 "compare": false, 00:04:43.209 "compare_and_write": false, 00:04:43.209 "flush": true, 00:04:43.209 "nvme_admin": false, 00:04:43.209 "nvme_io": false, 00:04:43.209 "read": true, 00:04:43.209 "reset": true, 00:04:43.209 "unmap": true, 00:04:43.209 "write": true, 00:04:43.209 "write_zeroes": true 00:04:43.209 }, 00:04:43.209 "uuid": "fe41c2d1-e1af-4bc0-a7dd-5b3627ba1d2e", 00:04:43.209 "zoned": false 00:04:43.209 }, 00:04:43.209 { 00:04:43.209 "aliases": [ 00:04:43.209 "3655bc52-9e7b-5901-909b-5660f5dba3ba" 00:04:43.209 ], 00:04:43.209 "assigned_rate_limits": { 00:04:43.209 "r_mbytes_per_sec": 0, 00:04:43.209 "rw_ios_per_sec": 0, 00:04:43.209 "rw_mbytes_per_sec": 0, 00:04:43.209 "w_mbytes_per_sec": 0 00:04:43.209 }, 00:04:43.209 "block_size": 512, 00:04:43.209 "claimed": false, 00:04:43.209 "driver_specific": { 00:04:43.209 "passthru": { 00:04:43.209 "base_bdev_name": "Malloc0", 00:04:43.209 "name": "Passthru0" 00:04:43.209 } 00:04:43.209 }, 00:04:43.209 "memory_domains": [ 00:04:43.209 { 00:04:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.209 "dma_device_type": 2 00:04:43.209 } 00:04:43.209 ], 00:04:43.209 "name": "Passthru0", 00:04:43.209 "num_blocks": 16384, 00:04:43.209 "product_name": "passthru", 00:04:43.209 "supported_io_types": { 00:04:43.209 "abort": true, 00:04:43.209 "compare": false, 00:04:43.209 "compare_and_write": false, 00:04:43.209 "flush": true, 00:04:43.209 "nvme_admin": false, 00:04:43.209 "nvme_io": false, 00:04:43.209 "read": true, 00:04:43.209 "reset": true, 00:04:43.209 "unmap": true, 00:04:43.209 "write": true, 00:04:43.209 "write_zeroes": true 00:04:43.209 }, 00:04:43.209 "uuid": "3655bc52-9e7b-5901-909b-5660f5dba3ba", 00:04:43.209 "zoned": false 00:04:43.209 } 00:04:43.209 ]' 00:04:43.209 03:48:18 -- rpc/rpc.sh@21 -- # jq length 00:04:43.209 03:48:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.209 03:48:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.209 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.209 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.209 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.209 03:48:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.209 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.209 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.209 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.209 03:48:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.209 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.209 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.467 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.467 03:48:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.467 03:48:18 -- rpc/rpc.sh@26 -- # jq length 00:04:43.467 ************************************ 00:04:43.467 END TEST rpc_integrity 00:04:43.467 ************************************ 00:04:43.467 03:48:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.467 00:04:43.467 real 0m0.323s 00:04:43.467 user 0m0.205s 00:04:43.467 sys 0m0.042s 00:04:43.467 03:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.467 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.467 03:48:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.467 03:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.467 
03:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.467 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.467 ************************************ 00:04:43.467 START TEST rpc_plugins 00:04:43.467 ************************************ 00:04:43.467 03:48:18 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:43.467 03:48:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.468 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.468 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.468 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.468 03:48:18 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.468 03:48:18 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.468 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.468 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.468 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.468 03:48:18 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.468 { 00:04:43.468 "aliases": [ 00:04:43.468 "48f05338-dc08-4db8-b8bf-08505caee6b1" 00:04:43.468 ], 00:04:43.468 "assigned_rate_limits": { 00:04:43.468 "r_mbytes_per_sec": 0, 00:04:43.468 "rw_ios_per_sec": 0, 00:04:43.468 "rw_mbytes_per_sec": 0, 00:04:43.468 "w_mbytes_per_sec": 0 00:04:43.468 }, 00:04:43.468 "block_size": 4096, 00:04:43.468 "claimed": false, 00:04:43.468 "driver_specific": {}, 00:04:43.468 "memory_domains": [ 00:04:43.468 { 00:04:43.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.468 "dma_device_type": 2 00:04:43.468 } 00:04:43.468 ], 00:04:43.468 "name": "Malloc1", 00:04:43.468 "num_blocks": 256, 00:04:43.468 "product_name": "Malloc disk", 00:04:43.468 "supported_io_types": { 00:04:43.468 "abort": true, 00:04:43.468 "compare": false, 00:04:43.468 "compare_and_write": false, 00:04:43.468 "flush": true, 00:04:43.468 "nvme_admin": false, 00:04:43.468 "nvme_io": false, 00:04:43.468 "read": true, 00:04:43.468 "reset": true, 00:04:43.468 "unmap": true, 00:04:43.468 "write": true, 00:04:43.468 "write_zeroes": true 00:04:43.468 }, 00:04:43.468 "uuid": "48f05338-dc08-4db8-b8bf-08505caee6b1", 00:04:43.468 "zoned": false 00:04:43.468 } 00:04:43.468 ]' 00:04:43.468 03:48:18 -- rpc/rpc.sh@32 -- # jq length 00:04:43.468 03:48:18 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.468 03:48:18 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.468 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.468 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.468 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.468 03:48:18 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.468 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.468 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.468 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.468 03:48:18 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.468 03:48:18 -- rpc/rpc.sh@36 -- # jq length 00:04:43.726 ************************************ 00:04:43.726 END TEST rpc_plugins 00:04:43.726 ************************************ 00:04:43.726 03:48:18 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.726 00:04:43.726 real 0m0.157s 00:04:43.726 user 0m0.102s 00:04:43.726 sys 0m0.016s 00:04:43.726 03:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.726 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.727 03:48:18 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
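rpc_plugins exercises the same daemon through rpc.py's plugin mechanism instead of its built-in verbs; the PYTHONPATH exported at the start of the suite is what makes the in-tree test plugin importable. A sketch of the calls, assuming that plugin (rpc_plugin, wrapping bdev_malloc_create/bdev_malloc_delete) is on the module path:

    export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC --plugin rpc_plugin create_malloc            # plugin-provided verb; prints Malloc1 (256 blocks of 4096 bytes)
    $RPC bdev_get_bdevs | jq length                   # 1
    $RPC --plugin rpc_plugin delete_malloc Malloc1
    $RPC bdev_get_bdevs | jq length                   # 0
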
00:04:43.727 03:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.727 03:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.727 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.727 ************************************ 00:04:43.727 START TEST rpc_trace_cmd_test 00:04:43.727 ************************************ 00:04:43.727 03:48:18 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:43.727 03:48:18 -- rpc/rpc.sh@40 -- # local info 00:04:43.727 03:48:18 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.727 03:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.727 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.727 03:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.727 03:48:18 -- rpc/rpc.sh@42 -- # info='{ 00:04:43.727 "bdev": { 00:04:43.727 "mask": "0x8", 00:04:43.727 "tpoint_mask": "0xffffffffffffffff" 00:04:43.727 }, 00:04:43.727 "bdev_nvme": { 00:04:43.727 "mask": "0x4000", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "blobfs": { 00:04:43.727 "mask": "0x80", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "dsa": { 00:04:43.727 "mask": "0x200", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "ftl": { 00:04:43.727 "mask": "0x40", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "iaa": { 00:04:43.727 "mask": "0x1000", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "iscsi_conn": { 00:04:43.727 "mask": "0x2", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "nvme_pcie": { 00:04:43.727 "mask": "0x800", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "nvme_tcp": { 00:04:43.727 "mask": "0x2000", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "nvmf_rdma": { 00:04:43.727 "mask": "0x10", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "nvmf_tcp": { 00:04:43.727 "mask": "0x20", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "scsi": { 00:04:43.727 "mask": "0x4", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "thread": { 00:04:43.727 "mask": "0x400", 00:04:43.727 "tpoint_mask": "0x0" 00:04:43.727 }, 00:04:43.727 "tpoint_group_mask": "0x8", 00:04:43.727 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55565" 00:04:43.727 }' 00:04:43.727 03:48:18 -- rpc/rpc.sh@43 -- # jq length 00:04:43.727 03:48:18 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:43.727 03:48:18 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.727 03:48:18 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.727 03:48:18 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.727 03:48:18 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.727 03:48:18 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.986 03:48:18 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.986 03:48:18 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.986 ************************************ 00:04:43.986 END TEST rpc_trace_cmd_test 00:04:43.986 ************************************ 00:04:43.986 03:48:18 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.986 00:04:43.986 real 0m0.274s 00:04:43.986 user 0m0.234s 00:04:43.986 sys 0m0.030s 00:04:43.986 03:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.986 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.986 03:48:18 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:43.986 03:48:18 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:43.986 03:48:18 -- common/autotest_common.sh@1087 -- # 
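rpc_trace_cmd_test only reads state: spdk_tgt was started with the bdev tracepoint group enabled, so trace_get_info must report a fully set mask for "bdev", the group mask 0x8, and the shared-memory trace file tied to the target's pid. The three jq probes from the test, runnable as-is against the same socket:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC trace_get_info | jq length                    # 15 keys here: one per trace group plus the global fields
    $RPC trace_get_info | jq 'has("tpoint_shm_path")'  # true; the file under /dev/shm doubles as the offline trace
    $RPC trace_get_info | jq -r .bdev.tpoint_mask      # 0xffffffffffffffff while the bdev group is enabled
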
'[' 2 -le 1 ']' 00:04:43.986 03:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.986 03:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.986 ************************************ 00:04:43.986 START TEST go_rpc 00:04:43.986 ************************************ 00:04:43.986 03:48:18 -- common/autotest_common.sh@1114 -- # go_rpc 00:04:43.986 03:48:18 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:43.986 03:48:18 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:43.986 03:48:18 -- rpc/rpc.sh@52 -- # jq length 00:04:43.986 03:48:19 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:43.986 03:48:19 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.986 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.986 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:43.986 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.986 03:48:19 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:43.986 03:48:19 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:43.986 03:48:19 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["03069fd8-f4cb-4d60-9283-ad00e50570cc"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"03069fd8-f4cb-4d60-9283-ad00e50570cc","zoned":false}]' 00:04:43.986 03:48:19 -- rpc/rpc.sh@57 -- # jq length 00:04:44.244 03:48:19 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:44.244 03:48:19 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.244 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.244 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.244 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.244 03:48:19 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:44.244 03:48:19 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:44.244 03:48:19 -- rpc/rpc.sh@61 -- # jq length 00:04:44.244 ************************************ 00:04:44.244 END TEST go_rpc 00:04:44.244 ************************************ 00:04:44.244 03:48:19 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:44.244 00:04:44.244 real 0m0.214s 00:04:44.244 user 0m0.156s 00:04:44.244 sys 0m0.030s 00:04:44.244 03:48:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.244 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.244 03:48:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.245 03:48:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.245 03:48:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.245 03:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.245 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.245 ************************************ 00:04:44.245 START TEST rpc_daemon_integrity 00:04:44.245 ************************************ 00:04:44.245 03:48:19 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:44.245 03:48:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.245 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.245 03:48:19 -- 
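go_rpc repeats the malloc round trip through the Go JSON-RPC client rather than rpc.py (this leg runs because SPDK_JSONRPC_GO_CLIENT=1 is set in autorun-spdk.conf). Judging by the captured output, hello_gorpc connects to the default target socket and prints the current bdev list as JSON, so the check reduces to the same jq count; that reading of the example's behavior is an assumption, since its flags are not shown here:

    GORPC=/home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
    $GORPC | jq length     # 0 before, 1 while Malloc2 exists, 0 again after deletion
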
common/autotest_common.sh@10 -- # set +x 00:04:44.245 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.245 03:48:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.245 03:48:19 -- rpc/rpc.sh@13 -- # jq length 00:04:44.245 03:48:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.245 03:48:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.245 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.245 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.245 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.245 03:48:19 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:44.245 03:48:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.245 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.245 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.245 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.245 03:48:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.245 { 00:04:44.245 "aliases": [ 00:04:44.245 "bdc6effd-fe85-4cec-ba50-c4b8a7a2838f" 00:04:44.245 ], 00:04:44.245 "assigned_rate_limits": { 00:04:44.245 "r_mbytes_per_sec": 0, 00:04:44.245 "rw_ios_per_sec": 0, 00:04:44.245 "rw_mbytes_per_sec": 0, 00:04:44.245 "w_mbytes_per_sec": 0 00:04:44.245 }, 00:04:44.245 "block_size": 512, 00:04:44.245 "claimed": false, 00:04:44.245 "driver_specific": {}, 00:04:44.245 "memory_domains": [ 00:04:44.245 { 00:04:44.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.245 "dma_device_type": 2 00:04:44.245 } 00:04:44.245 ], 00:04:44.245 "name": "Malloc3", 00:04:44.245 "num_blocks": 16384, 00:04:44.245 "product_name": "Malloc disk", 00:04:44.245 "supported_io_types": { 00:04:44.245 "abort": true, 00:04:44.245 "compare": false, 00:04:44.245 "compare_and_write": false, 00:04:44.245 "flush": true, 00:04:44.245 "nvme_admin": false, 00:04:44.245 "nvme_io": false, 00:04:44.245 "read": true, 00:04:44.245 "reset": true, 00:04:44.245 "unmap": true, 00:04:44.245 "write": true, 00:04:44.245 "write_zeroes": true 00:04:44.245 }, 00:04:44.245 "uuid": "bdc6effd-fe85-4cec-ba50-c4b8a7a2838f", 00:04:44.245 "zoned": false 00:04:44.245 } 00:04:44.245 ]' 00:04:44.245 03:48:19 -- rpc/rpc.sh@17 -- # jq length 00:04:44.504 03:48:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.504 03:48:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:44.504 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.504 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.504 [2024-11-08 03:48:19.392410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:44.504 [2024-11-08 03:48:19.392486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.504 [2024-11-08 03:48:19.392501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x209c680 00:04:44.504 [2024-11-08 03:48:19.392510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.504 [2024-11-08 03:48:19.393785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.504 [2024-11-08 03:48:19.393824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.504 Passthru0 00:04:44.504 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.504 03:48:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.504 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.504 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.504 
03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.504 03:48:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.504 { 00:04:44.504 "aliases": [ 00:04:44.504 "bdc6effd-fe85-4cec-ba50-c4b8a7a2838f" 00:04:44.504 ], 00:04:44.504 "assigned_rate_limits": { 00:04:44.504 "r_mbytes_per_sec": 0, 00:04:44.504 "rw_ios_per_sec": 0, 00:04:44.504 "rw_mbytes_per_sec": 0, 00:04:44.504 "w_mbytes_per_sec": 0 00:04:44.504 }, 00:04:44.504 "block_size": 512, 00:04:44.504 "claim_type": "exclusive_write", 00:04:44.504 "claimed": true, 00:04:44.504 "driver_specific": {}, 00:04:44.504 "memory_domains": [ 00:04:44.504 { 00:04:44.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.504 "dma_device_type": 2 00:04:44.504 } 00:04:44.504 ], 00:04:44.504 "name": "Malloc3", 00:04:44.504 "num_blocks": 16384, 00:04:44.504 "product_name": "Malloc disk", 00:04:44.504 "supported_io_types": { 00:04:44.504 "abort": true, 00:04:44.504 "compare": false, 00:04:44.504 "compare_and_write": false, 00:04:44.504 "flush": true, 00:04:44.504 "nvme_admin": false, 00:04:44.504 "nvme_io": false, 00:04:44.504 "read": true, 00:04:44.504 "reset": true, 00:04:44.504 "unmap": true, 00:04:44.504 "write": true, 00:04:44.504 "write_zeroes": true 00:04:44.504 }, 00:04:44.504 "uuid": "bdc6effd-fe85-4cec-ba50-c4b8a7a2838f", 00:04:44.504 "zoned": false 00:04:44.504 }, 00:04:44.504 { 00:04:44.504 "aliases": [ 00:04:44.504 "9ebc70bf-6b48-528b-8925-77f2a09c0e2c" 00:04:44.504 ], 00:04:44.504 "assigned_rate_limits": { 00:04:44.504 "r_mbytes_per_sec": 0, 00:04:44.504 "rw_ios_per_sec": 0, 00:04:44.504 "rw_mbytes_per_sec": 0, 00:04:44.504 "w_mbytes_per_sec": 0 00:04:44.504 }, 00:04:44.504 "block_size": 512, 00:04:44.504 "claimed": false, 00:04:44.504 "driver_specific": { 00:04:44.504 "passthru": { 00:04:44.504 "base_bdev_name": "Malloc3", 00:04:44.504 "name": "Passthru0" 00:04:44.504 } 00:04:44.504 }, 00:04:44.504 "memory_domains": [ 00:04:44.504 { 00:04:44.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.504 "dma_device_type": 2 00:04:44.504 } 00:04:44.504 ], 00:04:44.504 "name": "Passthru0", 00:04:44.504 "num_blocks": 16384, 00:04:44.504 "product_name": "passthru", 00:04:44.504 "supported_io_types": { 00:04:44.504 "abort": true, 00:04:44.504 "compare": false, 00:04:44.504 "compare_and_write": false, 00:04:44.504 "flush": true, 00:04:44.504 "nvme_admin": false, 00:04:44.504 "nvme_io": false, 00:04:44.504 "read": true, 00:04:44.504 "reset": true, 00:04:44.504 "unmap": true, 00:04:44.504 "write": true, 00:04:44.504 "write_zeroes": true 00:04:44.504 }, 00:04:44.504 "uuid": "9ebc70bf-6b48-528b-8925-77f2a09c0e2c", 00:04:44.504 "zoned": false 00:04:44.504 } 00:04:44.504 ]' 00:04:44.504 03:48:19 -- rpc/rpc.sh@21 -- # jq length 00:04:44.504 03:48:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.504 03:48:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.504 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.504 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.504 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.504 03:48:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:44.504 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.504 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.504 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.504 03:48:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.504 03:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.504 03:48:19 -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.504 03:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.504 03:48:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.504 03:48:19 -- rpc/rpc.sh@26 -- # jq length 00:04:44.504 ************************************ 00:04:44.504 END TEST rpc_daemon_integrity 00:04:44.504 ************************************ 00:04:44.504 03:48:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.504 00:04:44.504 real 0m0.318s 00:04:44.504 user 0m0.209s 00:04:44.504 sys 0m0.036s 00:04:44.504 03:48:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.504 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.504 03:48:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.504 03:48:19 -- rpc/rpc.sh@84 -- # killprocess 55565 00:04:44.504 03:48:19 -- common/autotest_common.sh@936 -- # '[' -z 55565 ']' 00:04:44.504 03:48:19 -- common/autotest_common.sh@940 -- # kill -0 55565 00:04:44.504 03:48:19 -- common/autotest_common.sh@941 -- # uname 00:04:44.504 03:48:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.504 03:48:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55565 00:04:44.763 killing process with pid 55565 00:04:44.763 03:48:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.763 03:48:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.763 03:48:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55565' 00:04:44.763 03:48:19 -- common/autotest_common.sh@955 -- # kill 55565 00:04:44.763 03:48:19 -- common/autotest_common.sh@960 -- # wait 55565 00:04:45.021 00:04:45.021 real 0m3.263s 00:04:45.021 user 0m4.254s 00:04:45.021 sys 0m0.779s 00:04:45.021 03:48:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.021 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.021 ************************************ 00:04:45.021 END TEST rpc 00:04:45.021 ************************************ 00:04:45.021 03:48:20 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.021 03:48:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.021 03:48:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.021 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.021 ************************************ 00:04:45.021 START TEST rpc_client 00:04:45.021 ************************************ 00:04:45.021 03:48:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.280 * Looking for test storage... 
00:04:45.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:45.280 03:48:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.280 03:48:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.280 03:48:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.280 03:48:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.280 03:48:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.280 03:48:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.280 03:48:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.280 03:48:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.280 03:48:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.280 03:48:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.281 03:48:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.281 03:48:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.281 03:48:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.281 03:48:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.281 03:48:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.281 03:48:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.281 03:48:20 -- scripts/common.sh@344 -- # : 1 00:04:45.281 03:48:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.281 03:48:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.281 03:48:20 -- scripts/common.sh@364 -- # decimal 1 00:04:45.281 03:48:20 -- scripts/common.sh@352 -- # local d=1 00:04:45.281 03:48:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.281 03:48:20 -- scripts/common.sh@354 -- # echo 1 00:04:45.281 03:48:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.281 03:48:20 -- scripts/common.sh@365 -- # decimal 2 00:04:45.281 03:48:20 -- scripts/common.sh@352 -- # local d=2 00:04:45.281 03:48:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.281 03:48:20 -- scripts/common.sh@354 -- # echo 2 00:04:45.281 03:48:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.281 03:48:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.281 03:48:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.281 03:48:20 -- scripts/common.sh@367 -- # return 0 00:04:45.281 03:48:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.281 03:48:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.281 --rc genhtml_branch_coverage=1 00:04:45.281 --rc genhtml_function_coverage=1 00:04:45.281 --rc genhtml_legend=1 00:04:45.281 --rc geninfo_all_blocks=1 00:04:45.281 --rc geninfo_unexecuted_blocks=1 00:04:45.281 00:04:45.281 ' 00:04:45.281 03:48:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.281 --rc genhtml_branch_coverage=1 00:04:45.281 --rc genhtml_function_coverage=1 00:04:45.281 --rc genhtml_legend=1 00:04:45.281 --rc geninfo_all_blocks=1 00:04:45.281 --rc geninfo_unexecuted_blocks=1 00:04:45.281 00:04:45.281 ' 00:04:45.281 03:48:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.281 --rc genhtml_branch_coverage=1 00:04:45.281 --rc genhtml_function_coverage=1 00:04:45.281 --rc genhtml_legend=1 00:04:45.281 --rc geninfo_all_blocks=1 00:04:45.281 --rc geninfo_unexecuted_blocks=1 00:04:45.281 00:04:45.281 ' 00:04:45.281 
03:48:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.281 --rc genhtml_branch_coverage=1 00:04:45.281 --rc genhtml_function_coverage=1 00:04:45.281 --rc genhtml_legend=1 00:04:45.281 --rc geninfo_all_blocks=1 00:04:45.281 --rc geninfo_unexecuted_blocks=1 00:04:45.281 00:04:45.281 ' 00:04:45.281 03:48:20 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:45.281 OK 00:04:45.281 03:48:20 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:45.281 00:04:45.281 real 0m0.207s 00:04:45.281 user 0m0.118s 00:04:45.281 sys 0m0.103s 00:04:45.281 03:48:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.281 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.281 ************************************ 00:04:45.281 END TEST rpc_client 00:04:45.281 ************************************ 00:04:45.281 03:48:20 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:45.281 03:48:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.281 03:48:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.281 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.281 ************************************ 00:04:45.281 START TEST json_config 00:04:45.281 ************************************ 00:04:45.281 03:48:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:45.541 03:48:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.541 03:48:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.541 03:48:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.541 03:48:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.541 03:48:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.541 03:48:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.541 03:48:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.541 03:48:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.541 03:48:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.541 03:48:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.541 03:48:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.541 03:48:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.541 03:48:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.541 03:48:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.541 03:48:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.541 03:48:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.541 03:48:20 -- scripts/common.sh@344 -- # : 1 00:04:45.541 03:48:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.541 03:48:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
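The "Looking for test storage" preamble that opens rpc_client also decides, in pure bash, whether the installed lcov is old enough to need the --rc branch/function coverage switches: it extracts the version with awk and runs it through the lt/cmp_versions helpers. A condensed sketch, assuming those helpers come from the sourced SPDK common scripts as the trace suggests:

    # lt A B succeeds when version A sorts before version B (cmp_versions does the per-field compare)
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
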
ver1_l : ver2_l) )) 00:04:45.541 03:48:20 -- scripts/common.sh@364 -- # decimal 1 00:04:45.541 03:48:20 -- scripts/common.sh@352 -- # local d=1 00:04:45.541 03:48:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.541 03:48:20 -- scripts/common.sh@354 -- # echo 1 00:04:45.541 03:48:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.541 03:48:20 -- scripts/common.sh@365 -- # decimal 2 00:04:45.541 03:48:20 -- scripts/common.sh@352 -- # local d=2 00:04:45.541 03:48:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.541 03:48:20 -- scripts/common.sh@354 -- # echo 2 00:04:45.541 03:48:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.541 03:48:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.541 03:48:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.541 03:48:20 -- scripts/common.sh@367 -- # return 0 00:04:45.541 03:48:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.541 03:48:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.541 --rc genhtml_branch_coverage=1 00:04:45.541 --rc genhtml_function_coverage=1 00:04:45.541 --rc genhtml_legend=1 00:04:45.541 --rc geninfo_all_blocks=1 00:04:45.541 --rc geninfo_unexecuted_blocks=1 00:04:45.541 00:04:45.541 ' 00:04:45.541 03:48:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.541 --rc genhtml_branch_coverage=1 00:04:45.541 --rc genhtml_function_coverage=1 00:04:45.541 --rc genhtml_legend=1 00:04:45.541 --rc geninfo_all_blocks=1 00:04:45.541 --rc geninfo_unexecuted_blocks=1 00:04:45.541 00:04:45.541 ' 00:04:45.541 03:48:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.541 --rc genhtml_branch_coverage=1 00:04:45.541 --rc genhtml_function_coverage=1 00:04:45.541 --rc genhtml_legend=1 00:04:45.541 --rc geninfo_all_blocks=1 00:04:45.541 --rc geninfo_unexecuted_blocks=1 00:04:45.541 00:04:45.541 ' 00:04:45.541 03:48:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.541 --rc genhtml_branch_coverage=1 00:04:45.541 --rc genhtml_function_coverage=1 00:04:45.541 --rc genhtml_legend=1 00:04:45.541 --rc geninfo_all_blocks=1 00:04:45.541 --rc geninfo_unexecuted_blocks=1 00:04:45.541 00:04:45.541 ' 00:04:45.541 03:48:20 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.541 03:48:20 -- nvmf/common.sh@7 -- # uname -s 00:04:45.541 03:48:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.541 03:48:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.541 03:48:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.541 03:48:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.541 03:48:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.541 03:48:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.541 03:48:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.541 03:48:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.541 03:48:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.541 03:48:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.541 03:48:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
00:04:45.541 03:48:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:04:45.541 03:48:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.541 03:48:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.541 03:48:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.541 03:48:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.541 03:48:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.541 03:48:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.541 03:48:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.541 03:48:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.541 03:48:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.541 03:48:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.541 03:48:20 -- paths/export.sh@5 -- # export PATH 00:04:45.541 03:48:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.541 03:48:20 -- nvmf/common.sh@46 -- # : 0 00:04:45.541 03:48:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:45.541 03:48:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:45.541 03:48:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:45.541 03:48:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.541 03:48:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.541 03:48:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:45.541 03:48:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:45.541 03:48:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:45.541 03:48:20 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.541 03:48:20 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:45.541 03:48:20 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:45.541 03:48:20 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:45.541 03:48:20 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:45.541 03:48:20 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:45.541 03:48:20 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:45.541 03:48:20 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:45.541 03:48:20 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:45.541 03:48:20 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:45.541 03:48:20 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.541 INFO: JSON configuration test init 00:04:45.541 03:48:20 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:45.541 03:48:20 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:45.541 03:48:20 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:45.541 03:48:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.541 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.541 03:48:20 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:45.541 03:48:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.541 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.541 03:48:20 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:45.541 03:48:20 -- json_config/json_config.sh@98 -- # local app=target 00:04:45.541 03:48:20 -- json_config/json_config.sh@99 -- # shift 00:04:45.541 03:48:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:45.541 03:48:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:45.541 03:48:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=55886 00:04:45.541 Waiting for target to run... 00:04:45.541 03:48:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:45.541 03:48:20 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:45.541 03:48:20 -- json_config/json_config.sh@114 -- # waitforlisten 55886 /var/tmp/spdk_tgt.sock 00:04:45.541 03:48:20 -- common/autotest_common.sh@829 -- # '[' -z 55886 ']' 00:04:45.542 03:48:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.542 03:48:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.542 03:48:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:45.542 03:48:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.542 03:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.542 [2024-11-08 03:48:20.629591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.542 [2024-11-08 03:48:20.629704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55886 ] 00:04:46.109 [2024-11-08 03:48:21.066080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.109 [2024-11-08 03:48:21.149407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.109 [2024-11-08 03:48:21.149616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.676 03:48:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.676 03:48:21 -- common/autotest_common.sh@862 -- # return 0 00:04:46.676 03:48:21 -- json_config/json_config.sh@115 -- # echo '' 00:04:46.676 00:04:46.676 03:48:21 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:46.676 03:48:21 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:46.676 03:48:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.676 03:48:21 -- common/autotest_common.sh@10 -- # set +x 00:04:46.676 03:48:21 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:46.676 03:48:21 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:46.676 03:48:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.676 03:48:21 -- common/autotest_common.sh@10 -- # set +x 00:04:46.676 03:48:21 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.676 03:48:21 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:46.676 03:48:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:47.243 03:48:22 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:47.243 03:48:22 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:47.243 03:48:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.243 03:48:22 -- common/autotest_common.sh@10 -- # set +x 00:04:47.243 03:48:22 -- json_config/json_config.sh@48 -- # local ret=0 00:04:47.243 03:48:22 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:47.243 03:48:22 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:47.243 03:48:22 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:47.243 03:48:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:47.243 03:48:22 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:47.502 03:48:22 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:47.502 03:48:22 -- json_config/json_config.sh@51 -- # local get_types 00:04:47.502 03:48:22 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:47.502 03:48:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.502 03:48:22 -- 
common/autotest_common.sh@10 -- # set +x 00:04:47.502 03:48:22 -- json_config/json_config.sh@58 -- # return 0 00:04:47.502 03:48:22 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:47.502 03:48:22 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:47.502 03:48:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.502 03:48:22 -- common/autotest_common.sh@10 -- # set +x 00:04:47.502 03:48:22 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:47.502 03:48:22 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:47.502 03:48:22 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.502 03:48:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.760 MallocForNvmf0 00:04:47.760 03:48:22 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.760 03:48:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.018 MallocForNvmf1 00:04:48.018 03:48:23 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.018 03:48:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.279 [2024-11-08 03:48:23.351075] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.279 03:48:23 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.279 03:48:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.551 03:48:23 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.552 03:48:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.810 03:48:23 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.810 03:48:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.069 03:48:24 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.069 03:48:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.328 [2024-11-08 03:48:24.403658] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.328 
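create_nvmf_subsystem_config assembles a complete NVMe/TCP target purely over RPC: two malloc namespaces, the TCP transport, one subsystem, and a listener on 127.0.0.1:4420. The same sequence by hand, with every command taken verbatim from the trace above (tgt_rpc is just rpc.py pointed at the target socket):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
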
03:48:24 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:49.328 03:48:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.328 03:48:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.586 03:48:24 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:49.586 03:48:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.586 03:48:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.586 03:48:24 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:49.586 03:48:24 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.586 03:48:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.844 MallocBdevForConfigChangeCheck 00:04:49.844 03:48:24 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:49.844 03:48:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.844 03:48:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.844 03:48:24 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:49.844 03:48:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.411 INFO: shutting down applications... 00:04:50.411 03:48:25 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:50.411 03:48:25 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:50.411 03:48:25 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:50.411 03:48:25 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:50.411 03:48:25 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.411 Calling clear_iscsi_subsystem 00:04:50.411 Calling clear_nvmf_subsystem 00:04:50.411 Calling clear_nbd_subsystem 00:04:50.411 Calling clear_ublk_subsystem 00:04:50.411 Calling clear_vhost_blk_subsystem 00:04:50.411 Calling clear_vhost_scsi_subsystem 00:04:50.411 Calling clear_scheduler_subsystem 00:04:50.411 Calling clear_bdev_subsystem 00:04:50.411 Calling clear_accel_subsystem 00:04:50.411 Calling clear_vmd_subsystem 00:04:50.411 Calling clear_sock_subsystem 00:04:50.411 Calling clear_iobuf_subsystem 00:04:50.411 03:48:25 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:50.411 03:48:25 -- json_config/json_config.sh@396 -- # count=100 00:04:50.411 03:48:25 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:50.411 03:48:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.411 03:48:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.411 03:48:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:50.978 03:48:25 -- json_config/json_config.sh@398 -- # break 00:04:50.978 03:48:25 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:50.978 03:48:25 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:50.978 03:48:25 -- json_config/json_config.sh@120 -- # local app=target 00:04:50.978 03:48:25 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:50.979 03:48:25 -- json_config/json_config.sh@124 -- # [[ -n 55886 ]] 00:04:50.979 03:48:25 -- json_config/json_config.sh@127 -- # kill -SIGINT 55886 00:04:50.979 03:48:25 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:50.979 03:48:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:50.979 03:48:25 -- json_config/json_config.sh@130 -- # kill -0 55886 00:04:50.979 03:48:25 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:51.545 03:48:26 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:51.545 03:48:26 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:51.545 03:48:26 -- json_config/json_config.sh@130 -- # kill -0 55886 00:04:51.545 03:48:26 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:51.545 03:48:26 -- json_config/json_config.sh@132 -- # break 00:04:51.545 03:48:26 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:51.545 SPDK target shutdown done 00:04:51.545 03:48:26 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:51.545 INFO: relaunching applications... 00:04:51.545 03:48:26 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:51.545 03:48:26 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.545 03:48:26 -- json_config/json_config.sh@98 -- # local app=target 00:04:51.545 03:48:26 -- json_config/json_config.sh@99 -- # shift 00:04:51.545 03:48:26 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:51.545 03:48:26 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:51.546 03:48:26 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:51.546 03:48:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.546 03:48:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.546 03:48:26 -- json_config/json_config.sh@111 -- # app_pid[$app]=56161 00:04:51.546 Waiting for target to run... 00:04:51.546 03:48:26 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.546 03:48:26 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:51.546 03:48:26 -- json_config/json_config.sh@114 -- # waitforlisten 56161 /var/tmp/spdk_tgt.sock 00:04:51.546 03:48:26 -- common/autotest_common.sh@829 -- # '[' -z 56161 ']' 00:04:51.546 03:48:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.546 03:48:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.546 03:48:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.546 03:48:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.546 03:48:26 -- common/autotest_common.sh@10 -- # set +x 00:04:51.546 [2024-11-08 03:48:26.447074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:51.546 [2024-11-08 03:48:26.447176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56161 ] 00:04:51.804 [2024-11-08 03:48:26.870432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.062 [2024-11-08 03:48:26.947008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.062 [2024-11-08 03:48:26.947142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.321 [2024-11-08 03:48:27.249015] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.321 [2024-11-08 03:48:27.281099] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:52.321 03:48:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.321 03:48:27 -- common/autotest_common.sh@862 -- # return 0 00:04:52.321 00:04:52.321 03:48:27 -- json_config/json_config.sh@115 -- # echo '' 00:04:52.321 03:48:27 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:52.321 INFO: Checking if target configuration is the same... 00:04:52.321 03:48:27 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:52.321 03:48:27 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.321 03:48:27 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:52.322 03:48:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.322 + '[' 2 -ne 2 ']' 00:04:52.322 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:52.322 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:52.322 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:52.322 +++ basename /dev/fd/62 00:04:52.322 ++ mktemp /tmp/62.XXX 00:04:52.322 + tmp_file_1=/tmp/62.Tlr 00:04:52.322 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.322 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.322 + tmp_file_2=/tmp/spdk_tgt_config.json.uqz 00:04:52.322 + ret=0 00:04:52.322 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:52.889 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:52.889 + diff -u /tmp/62.Tlr /tmp/spdk_tgt_config.json.uqz 00:04:52.889 INFO: JSON config files are the same 00:04:52.889 + echo 'INFO: JSON config files are the same' 00:04:52.889 + rm /tmp/62.Tlr /tmp/spdk_tgt_config.json.uqz 00:04:52.889 + exit 0 00:04:52.889 03:48:27 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:52.889 INFO: changing configuration and checking if this can be detected... 00:04:52.889 03:48:27 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
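The "configuration is the same" check works by normalizing both sides before diffing: the live config is dumped with save_config, both it and the saved spdk_tgt_config.json are passed through config_filter.py -method sort, and a plain diff decides the verdict. In essence (assuming config_filter.py reads stdin, consistent with the bare invocations in the trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > /tmp/live.sorted
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted
    diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'

The negative case that follows deletes MallocBdevForConfigChangeCheck first, so the diff returns nonzero and the harness prints the head and tail of both files before declaring the change detected.
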
00:04:52.889 03:48:27 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.889 03:48:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.148 03:48:28 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:53.148 03:48:28 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:53.148 03:48:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.148 + '[' 2 -ne 2 ']' 00:04:53.148 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:53.148 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:53.148 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:53.148 +++ basename /dev/fd/62 00:04:53.148 ++ mktemp /tmp/62.XXX 00:04:53.148 + tmp_file_1=/tmp/62.KiF 00:04:53.148 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:53.148 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.148 + tmp_file_2=/tmp/spdk_tgt_config.json.B4Q 00:04:53.148 + ret=0 00:04:53.148 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:53.408 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:53.667 + diff -u /tmp/62.KiF /tmp/spdk_tgt_config.json.B4Q 00:04:53.667 + ret=1 00:04:53.667 + echo '=== Start of file: /tmp/62.KiF ===' 00:04:53.667 + cat /tmp/62.KiF 00:04:53.667 + echo '=== End of file: /tmp/62.KiF ===' 00:04:53.667 + echo '' 00:04:53.667 + echo '=== Start of file: /tmp/spdk_tgt_config.json.B4Q ===' 00:04:53.667 + cat /tmp/spdk_tgt_config.json.B4Q 00:04:53.667 + echo '=== End of file: /tmp/spdk_tgt_config.json.B4Q ===' 00:04:53.667 + echo '' 00:04:53.667 + rm /tmp/62.KiF /tmp/spdk_tgt_config.json.B4Q 00:04:53.667 + exit 1 00:04:53.667 INFO: configuration change detected. 00:04:53.667 03:48:28 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
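That second pass is the same comparison run after deliberately perturbing the target: deleting the canary bdev guarantees the live config no longer matches the saved file, so the diff must come back nonzero. In outline (live_sorted_config is a hypothetical shorthand for the save_config pipeline in the previous sketch, not a helper in the repo):

  "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  # identical output here would mean the change detector is broken
  if diff -u <(live_sorted_config) "$tmp_file_2"; then
      echo 'ERROR: intentional config change went undetected' >&2
      exit 1
  fi
  echo 'INFO: configuration change detected.'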
00:04:53.667 03:48:28 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:53.667 03:48:28 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:53.667 03:48:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.667 03:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.667 03:48:28 -- json_config/json_config.sh@360 -- # local ret=0 00:04:53.667 03:48:28 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:53.667 03:48:28 -- json_config/json_config.sh@370 -- # [[ -n 56161 ]] 00:04:53.667 03:48:28 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:53.667 03:48:28 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:53.667 03:48:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.667 03:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.667 03:48:28 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:53.667 03:48:28 -- json_config/json_config.sh@246 -- # uname -s 00:04:53.667 03:48:28 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:53.667 03:48:28 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:53.667 03:48:28 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:53.667 03:48:28 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:53.667 03:48:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.667 03:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.667 03:48:28 -- json_config/json_config.sh@376 -- # killprocess 56161 00:04:53.667 03:48:28 -- common/autotest_common.sh@936 -- # '[' -z 56161 ']' 00:04:53.667 03:48:28 -- common/autotest_common.sh@940 -- # kill -0 56161 00:04:53.667 03:48:28 -- common/autotest_common.sh@941 -- # uname 00:04:53.667 03:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.667 03:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56161 00:04:53.667 03:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.667 03:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.667 killing process with pid 56161 00:04:53.667 03:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56161' 00:04:53.667 03:48:28 -- common/autotest_common.sh@955 -- # kill 56161 00:04:53.667 03:48:28 -- common/autotest_common.sh@960 -- # wait 56161 00:04:53.926 03:48:28 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:53.926 03:48:28 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:53.926 03:48:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.926 03:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.926 03:48:28 -- json_config/json_config.sh@381 -- # return 0 00:04:53.926 INFO: Success 00:04:53.926 03:48:28 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:53.926 00:04:53.926 real 0m8.603s 00:04:53.926 user 0m12.244s 00:04:53.926 sys 0m1.867s 00:04:53.926 03:48:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.926 03:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.926 ************************************ 00:04:53.926 END TEST json_config 00:04:53.926 ************************************ 00:04:53.926 03:48:29 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.926 
03:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.926 03:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.926 03:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:53.926 ************************************ 00:04:53.926 START TEST json_config_extra_key 00:04:53.926 ************************************ 00:04:53.926 03:48:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:54.185 03:48:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.185 03:48:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.185 03:48:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.185 03:48:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.185 03:48:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.185 03:48:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.185 03:48:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.185 03:48:29 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.185 03:48:29 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.185 03:48:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.185 03:48:29 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.185 03:48:29 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.185 03:48:29 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.185 03:48:29 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.185 03:48:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.185 03:48:29 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.185 03:48:29 -- scripts/common.sh@344 -- # : 1 00:04:54.185 03:48:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.185 03:48:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.185 03:48:29 -- scripts/common.sh@364 -- # decimal 1 00:04:54.185 03:48:29 -- scripts/common.sh@352 -- # local d=1 00:04:54.185 03:48:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.185 03:48:29 -- scripts/common.sh@354 -- # echo 1 00:04:54.185 03:48:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.185 03:48:29 -- scripts/common.sh@365 -- # decimal 2 00:04:54.185 03:48:29 -- scripts/common.sh@352 -- # local d=2 00:04:54.185 03:48:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.185 03:48:29 -- scripts/common.sh@354 -- # echo 2 00:04:54.185 03:48:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.185 03:48:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.185 03:48:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.185 03:48:29 -- scripts/common.sh@367 -- # return 0 00:04:54.185 03:48:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.185 03:48:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.185 --rc genhtml_branch_coverage=1 00:04:54.185 --rc genhtml_function_coverage=1 00:04:54.185 --rc genhtml_legend=1 00:04:54.185 --rc geninfo_all_blocks=1 00:04:54.185 --rc geninfo_unexecuted_blocks=1 00:04:54.185 00:04:54.185 ' 00:04:54.185 03:48:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.185 --rc genhtml_branch_coverage=1 00:04:54.185 --rc genhtml_function_coverage=1 00:04:54.185 --rc genhtml_legend=1 00:04:54.185 --rc geninfo_all_blocks=1 00:04:54.185 --rc geninfo_unexecuted_blocks=1 00:04:54.185 00:04:54.185 ' 
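The scripts/common.sh lines above are the lcov version gate: lt 1.15 2 asks whether the installed lcov predates 2.x, which decides the coverage flags exported next. Reconstructed from the xtrace, so the body is illustrative rather than the verbatim source, and only the two-way < and > operators are sketched:

  cmp_versions() {
      local op=$2 v d1 d2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}        # missing fields compare as 0
          ((d1 > d2)) && { [[ $op == '>' ]]; return; }
          ((d1 < d2)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]   # every field equal: only <=, >=, == would succeed
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # the traced call: lt 1.15 2 -> true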
00:04:54.185 03:48:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.185 --rc genhtml_branch_coverage=1 00:04:54.185 --rc genhtml_function_coverage=1 00:04:54.185 --rc genhtml_legend=1 00:04:54.185 --rc geninfo_all_blocks=1 00:04:54.185 --rc geninfo_unexecuted_blocks=1 00:04:54.185 00:04:54.185 ' 00:04:54.185 03:48:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:54.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.186 --rc genhtml_branch_coverage=1 00:04:54.186 --rc genhtml_function_coverage=1 00:04:54.186 --rc genhtml_legend=1 00:04:54.186 --rc geninfo_all_blocks=1 00:04:54.186 --rc geninfo_unexecuted_blocks=1 00:04:54.186 00:04:54.186 ' 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:54.186 03:48:29 -- nvmf/common.sh@7 -- # uname -s 00:04:54.186 03:48:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.186 03:48:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.186 03:48:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.186 03:48:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.186 03:48:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.186 03:48:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.186 03:48:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.186 03:48:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.186 03:48:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.186 03:48:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.186 03:48:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:04:54.186 03:48:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:04:54.186 03:48:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.186 03:48:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.186 03:48:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.186 03:48:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:54.186 03:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.186 03:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.186 03:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.186 03:48:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.186 03:48:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.186 03:48:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.186 03:48:29 -- paths/export.sh@5 -- # export PATH 00:04:54.186 03:48:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.186 03:48:29 -- nvmf/common.sh@46 -- # : 0 00:04:54.186 03:48:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:54.186 03:48:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:54.186 03:48:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:54.186 03:48:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.186 03:48:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.186 03:48:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:54.186 03:48:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:54.186 03:48:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:54.186 INFO: launching applications... 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56344 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:54.186 Waiting for target to run... 
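json_config_extra_key.sh keeps its state in the associative arrays declared above, keyed by app name; 'target' is the only key in this run. A minimal paraphrase of that bookkeeping plus the launch it feeds (pid and paths as they appear in the trace):

  rootdir=/home/vagrant/spdk_repo/spdk
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

  app=target
  # params stay unquoted on purpose: the string holds several flags
  "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
      -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!                                  # 56344 in this run
  waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"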
00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:54.186 03:48:29 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56344 /var/tmp/spdk_tgt.sock 00:04:54.186 03:48:29 -- common/autotest_common.sh@829 -- # '[' -z 56344 ']' 00:04:54.186 03:48:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.186 03:48:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.186 03:48:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.186 03:48:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.186 03:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:54.186 [2024-11-08 03:48:29.264485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:54.186 [2024-11-08 03:48:29.265370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56344 ] 00:04:54.753 [2024-11-08 03:48:29.766051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.012 [2024-11-08 03:48:29.870819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.012 [2024-11-08 03:48:29.871035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.271 00:04:55.271 INFO: shutting down applications... 00:04:55.271 03:48:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.271 03:48:30 -- common/autotest_common.sh@862 -- # return 0 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
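The shutdown traced below is a SIGINT followed by a bounded liveness poll: kill -0 delivers no signal, it only tests whether the pid still exists, so the loop gives the target up to 30 half-second ticks to exit on its own. A condensed sketch:

  kill -SIGINT "${app_pid[$app]}"
  for ((i = 0; i < 30; i++)); do
      if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
          app_pid[$app]=          # gone: clear the slot and stop polling
          break
      fi
      sleep 0.5
  done
  # a still-populated slot after the loop would mean the target hung
  [[ -z ${app_pid[$app]} ]] && echo 'SPDK target shutdown done'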
00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56344 ]] 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56344 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56344 00:04:55.271 03:48:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56344 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:55.838 SPDK target shutdown done 00:04:55.838 03:48:30 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:55.838 Success 00:04:55.838 00:04:55.838 real 0m1.796s 00:04:55.838 user 0m1.637s 00:04:55.838 sys 0m0.558s 00:04:55.838 ************************************ 00:04:55.838 END TEST json_config_extra_key 00:04:55.838 ************************************ 00:04:55.838 03:48:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.838 03:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:55.838 03:48:30 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.839 03:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.839 03:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.839 03:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:55.839 ************************************ 00:04:55.839 START TEST alias_rpc 00:04:55.839 ************************************ 00:04:55.839 03:48:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.839 * Looking for test storage... 
00:04:55.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:55.839 03:48:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.099 03:48:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.099 03:48:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.099 03:48:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.099 03:48:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.099 03:48:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.099 03:48:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.099 03:48:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.099 03:48:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.099 03:48:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.099 03:48:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.099 03:48:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.099 03:48:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.099 03:48:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.099 03:48:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.099 03:48:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.099 03:48:31 -- scripts/common.sh@344 -- # : 1 00:04:56.099 03:48:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.099 03:48:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.099 03:48:31 -- scripts/common.sh@364 -- # decimal 1 00:04:56.099 03:48:31 -- scripts/common.sh@352 -- # local d=1 00:04:56.099 03:48:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.099 03:48:31 -- scripts/common.sh@354 -- # echo 1 00:04:56.099 03:48:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.099 03:48:31 -- scripts/common.sh@365 -- # decimal 2 00:04:56.099 03:48:31 -- scripts/common.sh@352 -- # local d=2 00:04:56.099 03:48:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.099 03:48:31 -- scripts/common.sh@354 -- # echo 2 00:04:56.099 03:48:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.099 03:48:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.099 03:48:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.099 03:48:31 -- scripts/common.sh@367 -- # return 0 00:04:56.099 03:48:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.099 03:48:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.099 --rc genhtml_branch_coverage=1 00:04:56.099 --rc genhtml_function_coverage=1 00:04:56.099 --rc genhtml_legend=1 00:04:56.099 --rc geninfo_all_blocks=1 00:04:56.099 --rc geninfo_unexecuted_blocks=1 00:04:56.099 00:04:56.099 ' 00:04:56.099 03:48:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.099 --rc genhtml_branch_coverage=1 00:04:56.099 --rc genhtml_function_coverage=1 00:04:56.099 --rc genhtml_legend=1 00:04:56.099 --rc geninfo_all_blocks=1 00:04:56.099 --rc geninfo_unexecuted_blocks=1 00:04:56.099 00:04:56.099 ' 00:04:56.099 03:48:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.099 --rc genhtml_branch_coverage=1 00:04:56.099 --rc genhtml_function_coverage=1 00:04:56.099 --rc genhtml_legend=1 00:04:56.099 --rc geninfo_all_blocks=1 00:04:56.099 --rc geninfo_unexecuted_blocks=1 00:04:56.099 00:04:56.099 ' 
00:04:56.099 03:48:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.099 --rc genhtml_branch_coverage=1 00:04:56.099 --rc genhtml_function_coverage=1 00:04:56.099 --rc genhtml_legend=1 00:04:56.099 --rc geninfo_all_blocks=1 00:04:56.099 --rc geninfo_unexecuted_blocks=1 00:04:56.099 00:04:56.099 ' 00:04:56.099 03:48:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.099 03:48:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56433 00:04:56.099 03:48:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56433 00:04:56.099 03:48:31 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.099 03:48:31 -- common/autotest_common.sh@829 -- # '[' -z 56433 ']' 00:04:56.099 03:48:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.099 03:48:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.099 03:48:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.099 03:48:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.099 03:48:31 -- common/autotest_common.sh@10 -- # set +x 00:04:56.099 [2024-11-08 03:48:31.133745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.099 [2024-11-08 03:48:31.134443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56433 ] 00:04:56.361 [2024-11-08 03:48:31.275658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.361 [2024-11-08 03:48:31.389685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.361 [2024-11-08 03:48:31.390179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.298 03:48:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.298 03:48:32 -- common/autotest_common.sh@862 -- # return 0 00:04:57.298 03:48:32 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:57.557 03:48:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56433 00:04:57.557 03:48:32 -- common/autotest_common.sh@936 -- # '[' -z 56433 ']' 00:04:57.557 03:48:32 -- common/autotest_common.sh@940 -- # kill -0 56433 00:04:57.557 03:48:32 -- common/autotest_common.sh@941 -- # uname 00:04:57.557 03:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.557 03:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56433 00:04:57.557 03:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.557 killing process with pid 56433 00:04:57.557 03:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.557 03:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56433' 00:04:57.557 03:48:32 -- common/autotest_common.sh@955 -- # kill 56433 00:04:57.557 03:48:32 -- common/autotest_common.sh@960 -- # wait 56433 00:04:58.124 ************************************ 00:04:58.124 END TEST alias_rpc 00:04:58.124 ************************************ 00:04:58.124 00:04:58.124 real 0m2.064s 00:04:58.124 user 0m2.379s 00:04:58.124 sys 0m0.504s 
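killprocess, which alias_rpc just walked through for pid 56433, verifies the victim before signalling it and then reaps it with wait so the test sees a deterministic exit. A schematic reconstruction from the xtrace (the real helper in autotest_common.sh also special-cases processes wrapped in sudo, which this sketch omits):

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid"                                       # must still be alive
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true     # reap the child; ignore the signal-death status
  }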
00:04:58.124 03:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.124 03:48:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.124 03:48:32 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:04:58.124 03:48:32 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.124 03:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.124 03:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.124 03:48:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.124 ************************************ 00:04:58.124 START TEST dpdk_mem_utility 00:04:58.124 ************************************ 00:04:58.124 03:48:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.124 * Looking for test storage... 00:04:58.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:58.124 03:48:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.124 03:48:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.124 03:48:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.124 03:48:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.124 03:48:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.124 03:48:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.124 03:48:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.124 03:48:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.124 03:48:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.124 03:48:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.124 03:48:33 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.124 03:48:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:58.124 03:48:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.124 03:48:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.124 03:48:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.124 03:48:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.124 03:48:33 -- scripts/common.sh@344 -- # : 1 00:04:58.124 03:48:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.124 03:48:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.124 03:48:33 -- scripts/common.sh@364 -- # decimal 1 00:04:58.124 03:48:33 -- scripts/common.sh@352 -- # local d=1 00:04:58.124 03:48:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.124 03:48:33 -- scripts/common.sh@354 -- # echo 1 00:04:58.124 03:48:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.124 03:48:33 -- scripts/common.sh@365 -- # decimal 2 00:04:58.124 03:48:33 -- scripts/common.sh@352 -- # local d=2 00:04:58.124 03:48:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.124 03:48:33 -- scripts/common.sh@354 -- # echo 2 00:04:58.124 03:48:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.124 03:48:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.124 03:48:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.124 03:48:33 -- scripts/common.sh@367 -- # return 0 00:04:58.124 03:48:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.124 03:48:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.124 --rc genhtml_branch_coverage=1 00:04:58.124 --rc genhtml_function_coverage=1 00:04:58.124 --rc genhtml_legend=1 00:04:58.124 --rc geninfo_all_blocks=1 00:04:58.124 --rc geninfo_unexecuted_blocks=1 00:04:58.124 00:04:58.124 ' 00:04:58.124 03:48:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.124 --rc genhtml_branch_coverage=1 00:04:58.124 --rc genhtml_function_coverage=1 00:04:58.124 --rc genhtml_legend=1 00:04:58.124 --rc geninfo_all_blocks=1 00:04:58.124 --rc geninfo_unexecuted_blocks=1 00:04:58.125 00:04:58.125 ' 00:04:58.125 03:48:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.125 --rc genhtml_branch_coverage=1 00:04:58.125 --rc genhtml_function_coverage=1 00:04:58.125 --rc genhtml_legend=1 00:04:58.125 --rc geninfo_all_blocks=1 00:04:58.125 --rc geninfo_unexecuted_blocks=1 00:04:58.125 00:04:58.125 ' 00:04:58.125 03:48:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.125 --rc genhtml_branch_coverage=1 00:04:58.125 --rc genhtml_function_coverage=1 00:04:58.125 --rc genhtml_legend=1 00:04:58.125 --rc geninfo_all_blocks=1 00:04:58.125 --rc geninfo_unexecuted_blocks=1 00:04:58.125 00:04:58.125 ' 00:04:58.125 03:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.125 03:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56532 00:04:58.125 03:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.125 03:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56532 00:04:58.125 03:48:33 -- common/autotest_common.sh@829 -- # '[' -z 56532 ']' 00:04:58.125 03:48:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.125 03:48:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.125 03:48:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
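The dpdk_mem_utility test below exercises two pieces: an RPC that makes the target write its DPDK memory state to a file, and a parser that renders that dump. Stripped to its commands (taken from the trace further down; the dump path comes back in the RPC response, and rpc.py falls back to its default socket here):

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  "$rootdir/scripts/dpdk_mem_info.py"                # heap / mempool / memzone summary
  "$rootdir/scripts/dpdk_mem_info.py" -m 0           # per-element detail for heap id 0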
00:04:58.125 03:48:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.125 03:48:33 -- common/autotest_common.sh@10 -- # set +x 00:04:58.125 [2024-11-08 03:48:33.209219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:58.125 [2024-11-08 03:48:33.209337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56532 ] 00:04:58.383 [2024-11-08 03:48:33.340223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.383 [2024-11-08 03:48:33.455044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.383 [2024-11-08 03:48:33.455248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.322 03:48:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.322 03:48:34 -- common/autotest_common.sh@862 -- # return 0 00:04:59.322 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.322 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.322 03:48:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.322 03:48:34 -- common/autotest_common.sh@10 -- # set +x 00:04:59.322 { 00:04:59.322 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.322 } 00:04:59.322 03:48:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.322 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.322 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:59.322 1 heaps totaling size 814.000000 MiB 00:04:59.322 size: 814.000000 MiB heap id: 0 00:04:59.322 end heaps---------- 00:04:59.322 8 mempools totaling size 598.116089 MiB 00:04:59.322 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.322 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.322 size: 84.521057 MiB name: bdev_io_56532 00:04:59.322 size: 51.011292 MiB name: evtpool_56532 00:04:59.322 size: 50.003479 MiB name: msgpool_56532 00:04:59.322 size: 21.763794 MiB name: PDU_Pool 00:04:59.322 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.322 size: 0.026123 MiB name: Session_Pool 00:04:59.322 end mempools------- 00:04:59.322 6 memzones totaling size 4.142822 MiB 00:04:59.322 size: 1.000366 MiB name: RG_ring_0_56532 00:04:59.322 size: 1.000366 MiB name: RG_ring_1_56532 00:04:59.322 size: 1.000366 MiB name: RG_ring_4_56532 00:04:59.322 size: 1.000366 MiB name: RG_ring_5_56532 00:04:59.322 size: 0.125366 MiB name: RG_ring_2_56532 00:04:59.322 size: 0.015991 MiB name: RG_ring_3_56532 00:04:59.322 end memzones------- 00:04:59.322 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.322 heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15 00:04:59.322 list of free elements. 
size: 12.487854 MiB 00:04:59.322 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:59.322 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:59.322 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:59.322 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:59.322 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:59.322 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:59.322 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:59.322 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:59.322 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:59.322 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:04:59.322 element at address: 0x20000b200000 with size: 0.489990 MiB 00:04:59.322 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:59.322 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:59.322 element at address: 0x200027e00000 with size: 0.398499 MiB 00:04:59.322 element at address: 0x200003a00000 with size: 0.351685 MiB 00:04:59.322 list of standard malloc elements. size: 199.249573 MiB 00:04:59.322 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:59.322 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:59.322 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:59.322 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:59.322 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:59.322 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:59.322 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:59.322 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:59.322 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:59.322 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:04:59.322 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:59.322 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:59.322 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:59.323 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e66040 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e66100 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e700 with size: 0.000183 MiB 
00:04:59.323 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:59.323 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:59.323 list of memzone associated elements. 
size: 602.262573 MiB 00:04:59.323 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:59.323 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:59.323 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:59.323 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:59.323 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:59.323 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56532_0 00:04:59.323 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:59.323 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56532_0 00:04:59.323 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:59.323 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56532_0 00:04:59.323 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:59.323 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:59.323 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:59.323 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:59.323 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:59.323 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56532 00:04:59.323 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:59.323 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56532 00:04:59.323 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:59.323 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56532 00:04:59.323 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:59.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:59.323 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:59.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:59.323 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:59.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:59.323 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:59.323 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:59.323 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:59.323 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56532 00:04:59.323 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:59.323 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56532 00:04:59.323 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:59.323 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56532 00:04:59.323 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:59.323 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56532 00:04:59.323 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:59.323 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56532 00:04:59.323 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:59.323 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:59.323 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:59.324 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:59.324 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:59.324 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:59.324 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:59.324 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56532 00:04:59.324 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:59.324 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:59.324 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:04:59.324 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:59.324 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:59.324 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56532 00:04:59.324 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:04:59.324 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:59.324 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:59.324 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56532 00:04:59.324 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:59.324 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56532 00:04:59.324 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:04:59.324 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:59.324 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:59.324 03:48:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56532 00:04:59.324 03:48:34 -- common/autotest_common.sh@936 -- # '[' -z 56532 ']' 00:04:59.324 03:48:34 -- common/autotest_common.sh@940 -- # kill -0 56532 00:04:59.324 03:48:34 -- common/autotest_common.sh@941 -- # uname 00:04:59.324 03:48:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.324 03:48:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56532 00:04:59.324 03:48:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.324 killing process with pid 56532 00:04:59.324 03:48:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.324 03:48:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56532' 00:04:59.324 03:48:34 -- common/autotest_common.sh@955 -- # kill 56532 00:04:59.324 03:48:34 -- common/autotest_common.sh@960 -- # wait 56532 00:04:59.892 00:04:59.892 real 0m1.947s 00:04:59.892 user 0m2.108s 00:04:59.892 sys 0m0.441s 00:04:59.892 03:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.892 03:48:34 -- common/autotest_common.sh@10 -- # set +x 00:04:59.892 ************************************ 00:04:59.892 END TEST dpdk_mem_utility 00:04:59.892 ************************************ 00:04:59.892 03:48:34 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.892 03:48:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.892 03:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.892 03:48:34 -- common/autotest_common.sh@10 -- # set +x 00:04:59.892 ************************************ 00:04:59.892 START TEST event 00:04:59.892 ************************************ 00:04:59.892 03:48:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.150 * Looking for test storage... 
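The dpdk_mem_utility teardown traced just above is the harness's killprocess helper: it confirms the pid is still alive with kill -0, resolves the process name via ps --no-headers -o comm= (the trace compares it against sudo before killing), announces the kill, then kills and waits so the next test starts from a clean slate. A minimal sketch of that pattern, reconstructed from the visible xtrace rather than copied from autotest_common.sh, so treat the exact body as an assumption:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                 # bail out if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the real helper special-cases process_name = sudo; elided here (assumption)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap so sockets/hugepages are released
    }
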
00:05:00.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.150 03:48:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.150 03:48:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.150 03:48:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.150 03:48:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.150 03:48:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.150 03:48:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.150 03:48:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.150 03:48:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.150 03:48:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.150 03:48:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.150 03:48:35 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.150 03:48:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.150 03:48:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.150 03:48:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.150 03:48:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.151 03:48:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.151 03:48:35 -- scripts/common.sh@344 -- # : 1 00:05:00.151 03:48:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.151 03:48:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.151 03:48:35 -- scripts/common.sh@364 -- # decimal 1 00:05:00.151 03:48:35 -- scripts/common.sh@352 -- # local d=1 00:05:00.151 03:48:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.151 03:48:35 -- scripts/common.sh@354 -- # echo 1 00:05:00.151 03:48:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.151 03:48:35 -- scripts/common.sh@365 -- # decimal 2 00:05:00.151 03:48:35 -- scripts/common.sh@352 -- # local d=2 00:05:00.151 03:48:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.151 03:48:35 -- scripts/common.sh@354 -- # echo 2 00:05:00.151 03:48:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.151 03:48:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.151 03:48:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.151 03:48:35 -- scripts/common.sh@367 -- # return 0 00:05:00.151 03:48:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.151 03:48:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.151 --rc genhtml_branch_coverage=1 00:05:00.151 --rc genhtml_function_coverage=1 00:05:00.151 --rc genhtml_legend=1 00:05:00.151 --rc geninfo_all_blocks=1 00:05:00.151 --rc geninfo_unexecuted_blocks=1 00:05:00.151 00:05:00.151 ' 00:05:00.151 03:48:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.151 --rc genhtml_branch_coverage=1 00:05:00.151 --rc genhtml_function_coverage=1 00:05:00.151 --rc genhtml_legend=1 00:05:00.151 --rc geninfo_all_blocks=1 00:05:00.151 --rc geninfo_unexecuted_blocks=1 00:05:00.151 00:05:00.151 ' 00:05:00.151 03:48:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.151 --rc genhtml_branch_coverage=1 00:05:00.151 --rc genhtml_function_coverage=1 00:05:00.151 --rc genhtml_legend=1 00:05:00.151 --rc geninfo_all_blocks=1 00:05:00.151 --rc geninfo_unexecuted_blocks=1 00:05:00.151 00:05:00.151 ' 00:05:00.151 03:48:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.151 --rc genhtml_branch_coverage=1 00:05:00.151 --rc genhtml_function_coverage=1 00:05:00.151 --rc genhtml_legend=1 00:05:00.151 --rc geninfo_all_blocks=1 00:05:00.151 --rc geninfo_unexecuted_blocks=1 00:05:00.151 00:05:00.151 ' 00:05:00.151 03:48:35 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:00.151 03:48:35 -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.151 03:48:35 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.151 03:48:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:00.151 03:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.151 03:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:00.151 ************************************ 00:05:00.151 START TEST event_perf 00:05:00.151 ************************************ 00:05:00.151 03:48:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.151 Running I/O for 1 seconds...[2024-11-08 03:48:35.193960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.151 [2024-11-08 03:48:35.194072] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56634 ] 00:05:00.409 [2024-11-08 03:48:35.334779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.409 [2024-11-08 03:48:35.465666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.409 [2024-11-08 03:48:35.465744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.409 [2024-11-08 03:48:35.465949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.409 [2024-11-08 03:48:35.465963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.782 Running I/O for 1 seconds... 00:05:01.782 lcore 0: 135586 00:05:01.782 lcore 1: 135586 00:05:01.782 lcore 2: 135584 00:05:01.782 lcore 3: 135586 00:05:01.782 done. 00:05:01.782 00:05:01.782 real 0m1.422s 00:05:01.782 user 0m4.226s 00:05:01.782 sys 0m0.075s 00:05:01.782 03:48:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.782 03:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.782 ************************************ 00:05:01.782 END TEST event_perf 00:05:01.782 ************************************ 00:05:01.782 03:48:36 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.782 03:48:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:01.782 03:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.782 03:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.782 ************************************ 00:05:01.782 START TEST event_reactor 00:05:01.782 ************************************ 00:05:01.782 03:48:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.782 [2024-11-08 03:48:36.664448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:01.782 [2024-11-08 03:48:36.664554] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56667 ] 00:05:01.782 [2024-11-08 03:48:36.799476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.041 [2024-11-08 03:48:36.934862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.977 test_start 00:05:02.977 oneshot 00:05:02.977 tick 100 00:05:02.977 tick 100 00:05:02.977 tick 250 00:05:02.977 tick 100 00:05:02.977 tick 100 00:05:02.977 tick 250 00:05:02.977 tick 500 00:05:02.977 tick 100 00:05:02.977 tick 100 00:05:02.977 tick 100 00:05:02.977 tick 250 00:05:02.977 tick 100 00:05:02.977 tick 100 00:05:02.977 test_end 00:05:02.977 00:05:02.977 real 0m1.412s 00:05:02.977 user 0m1.239s 00:05:02.977 sys 0m0.066s 00:05:02.977 03:48:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.977 03:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:02.977 ************************************ 00:05:02.977 END TEST event_reactor 00:05:02.977 ************************************ 00:05:03.235 03:48:38 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.236 03:48:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:03.236 03:48:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.236 03:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.236 ************************************ 00:05:03.236 START TEST event_reactor_perf 00:05:03.236 ************************************ 00:05:03.236 03:48:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.236 [2024-11-08 03:48:38.127985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:03.236 [2024-11-08 03:48:38.128607] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56702 ] 00:05:03.236 [2024-11-08 03:48:38.263351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.236 [2024-11-08 03:48:38.338563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.630 test_start 00:05:04.630 test_end 00:05:04.630 Performance: 418770 events per second 00:05:04.630 00:05:04.630 real 0m1.325s 00:05:04.630 user 0m1.164s 00:05:04.630 sys 0m0.052s 00:05:04.630 03:48:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.630 03:48:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.630 ************************************ 00:05:04.630 END TEST event_reactor_perf 00:05:04.630 ************************************ 00:05:04.630 03:48:39 -- event/event.sh@49 -- # uname -s 00:05:04.630 03:48:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.631 03:48:39 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.631 03:48:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.631 03:48:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 ************************************ 00:05:04.631 START TEST event_scheduler 00:05:04.631 ************************************ 00:05:04.631 03:48:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.631 * Looking for test storage... 00:05:04.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:04.631 03:48:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.631 03:48:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.631 03:48:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.631 03:48:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.631 03:48:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.631 03:48:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.631 03:48:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.631 03:48:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.631 03:48:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.631 03:48:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.631 03:48:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.631 03:48:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.631 03:48:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.631 03:48:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.631 03:48:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.631 03:48:39 -- scripts/common.sh@344 -- # : 1 00:05:04.631 03:48:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.631 03:48:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.631 03:48:39 -- scripts/common.sh@364 -- # decimal 1 00:05:04.631 03:48:39 -- scripts/common.sh@352 -- # local d=1 00:05:04.631 03:48:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.631 03:48:39 -- scripts/common.sh@354 -- # echo 1 00:05:04.631 03:48:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.631 03:48:39 -- scripts/common.sh@365 -- # decimal 2 00:05:04.631 03:48:39 -- scripts/common.sh@352 -- # local d=2 00:05:04.631 03:48:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.631 03:48:39 -- scripts/common.sh@354 -- # echo 2 00:05:04.631 03:48:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.631 03:48:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.631 03:48:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.631 03:48:39 -- scripts/common.sh@367 -- # return 0 00:05:04.631 03:48:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.631 --rc genhtml_branch_coverage=1 00:05:04.631 --rc genhtml_function_coverage=1 00:05:04.631 --rc genhtml_legend=1 00:05:04.631 --rc geninfo_all_blocks=1 00:05:04.631 --rc geninfo_unexecuted_blocks=1 00:05:04.631 00:05:04.631 ' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.631 --rc genhtml_branch_coverage=1 00:05:04.631 --rc genhtml_function_coverage=1 00:05:04.631 --rc genhtml_legend=1 00:05:04.631 --rc geninfo_all_blocks=1 00:05:04.631 --rc geninfo_unexecuted_blocks=1 00:05:04.631 00:05:04.631 ' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.631 --rc genhtml_branch_coverage=1 00:05:04.631 --rc genhtml_function_coverage=1 00:05:04.631 --rc genhtml_legend=1 00:05:04.631 --rc geninfo_all_blocks=1 00:05:04.631 --rc geninfo_unexecuted_blocks=1 00:05:04.631 00:05:04.631 ' 00:05:04.631 03:48:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.631 --rc genhtml_branch_coverage=1 00:05:04.631 --rc genhtml_function_coverage=1 00:05:04.631 --rc genhtml_legend=1 00:05:04.631 --rc geninfo_all_blocks=1 00:05:04.631 --rc geninfo_unexecuted_blocks=1 00:05:04.631 00:05:04.631 ' 00:05:04.631 03:48:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.631 03:48:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56771 00:05:04.631 03:48:39 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.631 03:48:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.631 03:48:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 56771 00:05:04.631 03:48:39 -- common/autotest_common.sh@829 -- # '[' -z 56771 ']' 00:05:04.631 03:48:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.631 03:48:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.631 03:48:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
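The scheduler app above is launched in the background with --wait-for-rpc, and waitforlisten 56771 then blocks until the daemon's RPC socket at /var/tmp/spdk.sock accepts commands, with max_retries=100 per the traced locals. The body of waitforlisten is not shown in this trace, so the loop below is a hypothetical sketch of the pattern; in particular, probing readiness with scripts/rpc.py rpc_get_methods is an assumption:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1      # the daemon died while starting up
            # hypothetical readiness probe; the real helper may check differently
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                    # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1                            # never came up within the retry budget
    }
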
00:05:04.631 03:48:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.631 03:48:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 [2024-11-08 03:48:39.726509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.631 [2024-11-08 03:48:39.726624] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56771 ] 00:05:04.888 [2024-11-08 03:48:39.869141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.888 [2024-11-08 03:48:39.960953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.888 [2024-11-08 03:48:39.961115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.888 [2024-11-08 03:48:39.961138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.888 [2024-11-08 03:48:39.961141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.146 03:48:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.146 03:48:40 -- common/autotest_common.sh@862 -- # return 0 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 POWER: Env isn't set yet! 00:05:05.146 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:05.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.146 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.146 POWER: Attempting to initialise PSTAT power management... 00:05:05.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.146 POWER: Cannot set governor of lcore 0 to performance 00:05:05.146 POWER: Attempting to initialise AMD PSTATE power management... 00:05:05.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.146 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.146 POWER: Attempting to initialise CPPC power management... 00:05:05.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.146 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.146 POWER: Attempting to initialise VM power management... 
00:05:05.146 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:05.146 POWER: Unable to set Power Management Environment for lcore 0 00:05:05.146 [2024-11-08 03:48:40.022834] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:05.146 [2024-11-08 03:48:40.022847] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:05.146 [2024-11-08 03:48:40.022855] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.146 [2024-11-08 03:48:40.022868] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:05.146 [2024-11-08 03:48:40.022876] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:05.146 [2024-11-08 03:48:40.022883] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 [2024-11-08 03:48:40.111498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:05.146 03:48:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.146 03:48:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 ************************************ 00:05:05.146 START TEST scheduler_create_thread 00:05:05.146 ************************************ 00:05:05.146 03:48:40 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 2 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 3 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 4 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 5 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 6 00:05:05.146 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 03:48:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.146 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 7 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 8 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 9 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 10 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.147 03:48:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.147 03:48:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.147 03:48:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.147 03:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.045 03:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.045 03:48:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.045 03:48:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.045 03:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.045 03:48:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.979 ************************************ 00:05:07.979 END TEST scheduler_create_thread 00:05:07.979 ************************************ 00:05:07.979 03:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.979 00:05:07.979 real 0m2.612s 00:05:07.979 user 0m0.017s 00:05:07.979 sys 0m0.007s 00:05:07.979 03:48:42 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.979 03:48:42 -- common/autotest_common.sh@10 -- # set +x 00:05:07.979 03:48:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.979 03:48:42 -- scheduler/scheduler.sh@46 -- # killprocess 56771 00:05:07.979 03:48:42 -- common/autotest_common.sh@936 -- # '[' -z 56771 ']' 00:05:07.979 03:48:42 -- common/autotest_common.sh@940 -- # kill -0 56771 00:05:07.979 03:48:42 -- common/autotest_common.sh@941 -- # uname 00:05:07.979 03:48:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.979 03:48:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56771 00:05:07.979 killing process with pid 56771 00:05:07.979 03:48:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:07.979 03:48:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:07.979 03:48:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56771' 00:05:07.979 03:48:42 -- common/autotest_common.sh@955 -- # kill 56771 00:05:07.979 03:48:42 -- common/autotest_common.sh@960 -- # wait 56771 00:05:08.237 [2024-11-08 03:48:43.215917] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:08.496 ************************************ 00:05:08.496 END TEST event_scheduler 00:05:08.496 ************************************ 00:05:08.496 00:05:08.496 real 0m3.972s 00:05:08.496 user 0m5.815s 00:05:08.496 sys 0m0.374s 00:05:08.496 03:48:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.496 03:48:43 -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 03:48:43 -- event/event.sh@51 -- # modprobe -n nbd 00:05:08.496 03:48:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:08.496 03:48:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.496 03:48:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.496 03:48:43 -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 ************************************ 00:05:08.496 START TEST app_repeat 00:05:08.496 ************************************ 00:05:08.496 03:48:43 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:08.496 03:48:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.496 03:48:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.496 03:48:43 -- event/event.sh@13 -- # local nbd_list 00:05:08.496 03:48:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.496 03:48:43 -- event/event.sh@14 -- # local bdev_list 00:05:08.496 03:48:43 -- event/event.sh@15 -- # local repeat_times=4 00:05:08.496 03:48:43 -- event/event.sh@17 -- # modprobe nbd 00:05:08.496 Process app_repeat pid: 56875 00:05:08.496 spdk_app_start Round 0 00:05:08.496 03:48:43 -- event/event.sh@19 -- # repeat_pid=56875 00:05:08.496 03:48:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.496 03:48:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56875' 00:05:08.496 03:48:43 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:08.496 03:48:43 -- event/event.sh@23 -- # for i in {0..2} 00:05:08.496 03:48:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:08.496 03:48:43 -- event/event.sh@25 -- # waitforlisten 56875 /var/tmp/spdk-nbd.sock 00:05:08.496 03:48:43 -- common/autotest_common.sh@829 -- # '[' -z 56875 ']' 00:05:08.496 03:48:43 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.496 03:48:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.496 03:48:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.496 03:48:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.496 03:48:43 -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 [2024-11-08 03:48:43.552780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.496 [2024-11-08 03:48:43.552881] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56875 ] 00:05:08.755 [2024-11-08 03:48:43.687354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.755 [2024-11-08 03:48:43.771526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.755 [2024-11-08 03:48:43.771545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.689 03:48:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.689 03:48:44 -- common/autotest_common.sh@862 -- # return 0 00:05:09.689 03:48:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.689 Malloc0 00:05:09.947 03:48:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.947 Malloc1 00:05:09.947 03:48:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.947 03:48:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@12 -- # local i 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.206 03:48:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.464 /dev/nbd0 00:05:10.464 03:48:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.464 03:48:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.464 03:48:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:10.464 03:48:45 -- common/autotest_common.sh@867 -- # local i 00:05:10.464 03:48:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.464 03:48:45 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.464 03:48:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:10.464 03:48:45 -- common/autotest_common.sh@871 -- # break 00:05:10.465 03:48:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.465 03:48:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.465 03:48:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.465 1+0 records in 00:05:10.465 1+0 records out 00:05:10.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249942 s, 16.4 MB/s 00:05:10.465 03:48:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.465 03:48:45 -- common/autotest_common.sh@884 -- # size=4096 00:05:10.465 03:48:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.465 03:48:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.465 03:48:45 -- common/autotest_common.sh@887 -- # return 0 00:05:10.465 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.465 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.465 03:48:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.723 /dev/nbd1 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.723 03:48:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:10.723 03:48:45 -- common/autotest_common.sh@867 -- # local i 00:05:10.723 03:48:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.723 03:48:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.723 03:48:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:10.723 03:48:45 -- common/autotest_common.sh@871 -- # break 00:05:10.723 03:48:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.723 03:48:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.723 03:48:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.723 1+0 records in 00:05:10.723 1+0 records out 00:05:10.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256722 s, 16.0 MB/s 00:05:10.723 03:48:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.723 03:48:45 -- common/autotest_common.sh@884 -- # size=4096 00:05:10.723 03:48:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.723 03:48:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.723 03:48:45 -- common/autotest_common.sh@887 -- # return 0 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.723 03:48:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.982 { 00:05:10.982 "bdev_name": "Malloc0", 00:05:10.982 "nbd_device": "/dev/nbd0" 00:05:10.982 }, 00:05:10.982 { 00:05:10.982 "bdev_name": "Malloc1", 
00:05:10.982 "nbd_device": "/dev/nbd1" 00:05:10.982 } 00:05:10.982 ]' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.982 { 00:05:10.982 "bdev_name": "Malloc0", 00:05:10.982 "nbd_device": "/dev/nbd0" 00:05:10.982 }, 00:05:10.982 { 00:05:10.982 "bdev_name": "Malloc1", 00:05:10.982 "nbd_device": "/dev/nbd1" 00:05:10.982 } 00:05:10.982 ]' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.982 /dev/nbd1' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.982 /dev/nbd1' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.982 256+0 records in 00:05:10.982 256+0 records out 00:05:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105532 s, 99.4 MB/s 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.982 03:48:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.241 256+0 records in 00:05:11.241 256+0 records out 00:05:11.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242953 s, 43.2 MB/s 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.241 256+0 records in 00:05:11.241 256+0 records out 00:05:11.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268907 s, 39.0 MB/s 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.241 03:48:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@41 -- # break 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.499 03:48:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@41 -- # break 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.758 03:48:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@65 -- # true 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.016 03:48:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.016 03:48:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.275 03:48:47 -- event/event.sh@35 -- # sleep 3 00:05:12.534 [2024-11-08 03:48:47.594410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.806 [2024-11-08 03:48:47.672191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.806 [2024-11-08 
03:48:47.672197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.806 [2024-11-08 03:48:47.744764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.806 [2024-11-08 03:48:47.744841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.358 03:48:50 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.358 spdk_app_start Round 1 00:05:15.358 03:48:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.358 03:48:50 -- event/event.sh@25 -- # waitforlisten 56875 /var/tmp/spdk-nbd.sock 00:05:15.358 03:48:50 -- common/autotest_common.sh@829 -- # '[' -z 56875 ']' 00:05:15.358 03:48:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.358 03:48:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.358 03:48:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.358 03:48:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.358 03:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:15.616 03:48:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.616 03:48:50 -- common/autotest_common.sh@862 -- # return 0 00:05:15.616 03:48:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.874 Malloc0 00:05:15.874 03:48:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.132 Malloc1 00:05:16.132 03:48:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.132 03:48:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.390 /dev/nbd0 00:05:16.390 03:48:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.390 03:48:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.390 03:48:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.390 03:48:51 -- common/autotest_common.sh@867 -- # local i 00:05:16.390 03:48:51 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:16.390 03:48:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.390 03:48:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.390 03:48:51 -- common/autotest_common.sh@871 -- # break 00:05:16.390 03:48:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.390 03:48:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.390 03:48:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.390 1+0 records in 00:05:16.390 1+0 records out 00:05:16.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229502 s, 17.8 MB/s 00:05:16.390 03:48:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.390 03:48:51 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.390 03:48:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.390 03:48:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.390 03:48:51 -- common/autotest_common.sh@887 -- # return 0 00:05:16.390 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.390 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.390 03:48:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.649 /dev/nbd1 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.649 03:48:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:16.649 03:48:51 -- common/autotest_common.sh@867 -- # local i 00:05:16.649 03:48:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.649 03:48:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.649 03:48:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:16.649 03:48:51 -- common/autotest_common.sh@871 -- # break 00:05:16.649 03:48:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.649 03:48:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.649 03:48:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.649 1+0 records in 00:05:16.649 1+0 records out 00:05:16.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346655 s, 11.8 MB/s 00:05:16.649 03:48:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.649 03:48:51 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.649 03:48:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.649 03:48:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.649 03:48:51 -- common/autotest_common.sh@887 -- # return 0 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.649 03:48:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.907 { 00:05:16.907 "bdev_name": "Malloc0", 00:05:16.907 "nbd_device": "/dev/nbd0" 00:05:16.907 }, 00:05:16.907 { 00:05:16.907 
"bdev_name": "Malloc1", 00:05:16.907 "nbd_device": "/dev/nbd1" 00:05:16.907 } 00:05:16.907 ]' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.907 { 00:05:16.907 "bdev_name": "Malloc0", 00:05:16.907 "nbd_device": "/dev/nbd0" 00:05:16.907 }, 00:05:16.907 { 00:05:16.907 "bdev_name": "Malloc1", 00:05:16.907 "nbd_device": "/dev/nbd1" 00:05:16.907 } 00:05:16.907 ]' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.907 /dev/nbd1' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.907 /dev/nbd1' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.907 256+0 records in 00:05:16.907 256+0 records out 00:05:16.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0099973 s, 105 MB/s 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.907 256+0 records in 00:05:16.907 256+0 records out 00:05:16.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237385 s, 44.2 MB/s 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.907 03:48:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.907 256+0 records in 00:05:16.907 256+0 records out 00:05:16.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027497 s, 38.1 MB/s 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.907 03:48:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.166 03:48:52 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@41 -- # break 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.166 03:48:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@41 -- # break 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.425 03:48:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@65 -- # true 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.992 03:48:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.992 03:48:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.992 03:48:53 -- event/event.sh@35 -- # sleep 3 00:05:18.559 [2024-11-08 03:48:53.387445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.559 [2024-11-08 03:48:53.456956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:05:18.559 [2024-11-08 03:48:53.456977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.559 [2024-11-08 03:48:53.529308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.559 [2024-11-08 03:48:53.529380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.090 03:48:56 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.090 spdk_app_start Round 2 00:05:21.090 03:48:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.090 03:48:56 -- event/event.sh@25 -- # waitforlisten 56875 /var/tmp/spdk-nbd.sock 00:05:21.090 03:48:56 -- common/autotest_common.sh@829 -- # '[' -z 56875 ']' 00:05:21.090 03:48:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.091 03:48:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.091 03:48:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.091 03:48:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.091 03:48:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.349 03:48:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.349 03:48:56 -- common/autotest_common.sh@862 -- # return 0 00:05:21.349 03:48:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.609 Malloc0 00:05:21.609 03:48:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.868 Malloc1 00:05:21.868 03:48:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.868 03:48:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.126 /dev/nbd0 00:05:22.126 03:48:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.126 03:48:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.126 03:48:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.126 03:48:57 -- common/autotest_common.sh@867 -- # local i 00:05:22.126 03:48:57 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.126 03:48:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.126 03:48:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.126 03:48:57 -- common/autotest_common.sh@871 -- # break 00:05:22.126 03:48:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.126 03:48:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.384 03:48:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.384 1+0 records in 00:05:22.384 1+0 records out 00:05:22.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367565 s, 11.1 MB/s 00:05:22.384 03:48:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.384 03:48:57 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.384 03:48:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.385 03:48:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.385 03:48:57 -- common/autotest_common.sh@887 -- # return 0 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.385 /dev/nbd1 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.385 03:48:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.385 03:48:57 -- common/autotest_common.sh@867 -- # local i 00:05:22.385 03:48:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.385 03:48:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.385 03:48:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.385 03:48:57 -- common/autotest_common.sh@871 -- # break 00:05:22.385 03:48:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.385 03:48:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.385 03:48:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.385 1+0 records in 00:05:22.385 1+0 records out 00:05:22.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237143 s, 17.3 MB/s 00:05:22.385 03:48:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.385 03:48:57 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.385 03:48:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.385 03:48:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.385 03:48:57 -- common/autotest_common.sh@887 -- # return 0 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.385 03:48:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.643 { 00:05:22.643 "bdev_name": "Malloc0", 00:05:22.643 "nbd_device": "/dev/nbd0" 
00:05:22.643 }, 00:05:22.643 { 00:05:22.643 "bdev_name": "Malloc1", 00:05:22.643 "nbd_device": "/dev/nbd1" 00:05:22.643 } 00:05:22.643 ]' 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.643 { 00:05:22.643 "bdev_name": "Malloc0", 00:05:22.643 "nbd_device": "/dev/nbd0" 00:05:22.643 }, 00:05:22.643 { 00:05:22.643 "bdev_name": "Malloc1", 00:05:22.643 "nbd_device": "/dev/nbd1" 00:05:22.643 } 00:05:22.643 ]' 00:05:22.643 03:48:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.901 /dev/nbd1' 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.901 /dev/nbd1' 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.901 03:48:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.901 256+0 records in 00:05:22.901 256+0 records out 00:05:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736443 s, 142 MB/s 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.902 256+0 records in 00:05:22.902 256+0 records out 00:05:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237369 s, 44.2 MB/s 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.902 256+0 records in 00:05:22.902 256+0 records out 00:05:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244 s, 43.0 MB/s 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.902 03:48:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@41 -- # break 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.160 03:48:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@41 -- # break 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@65 -- # true 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.727 03:48:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.727 03:48:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.294 03:48:59 -- event/event.sh@35 -- # sleep 3 00:05:24.294 [2024-11-08 03:48:59.362387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.553 [2024-11-08 03:48:59.466866] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:24.553 [2024-11-08 03:48:59.466879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.553 [2024-11-08 03:48:59.522122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.553 [2024-11-08 03:48:59.522226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.083 03:49:02 -- event/event.sh@38 -- # waitforlisten 56875 /var/tmp/spdk-nbd.sock 00:05:27.083 03:49:02 -- common/autotest_common.sh@829 -- # '[' -z 56875 ']' 00:05:27.083 03:49:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.083 03:49:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.083 03:49:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.083 03:49:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.083 03:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.342 03:49:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.342 03:49:02 -- common/autotest_common.sh@862 -- # return 0 00:05:27.342 03:49:02 -- event/event.sh@39 -- # killprocess 56875 00:05:27.342 03:49:02 -- common/autotest_common.sh@936 -- # '[' -z 56875 ']' 00:05:27.342 03:49:02 -- common/autotest_common.sh@940 -- # kill -0 56875 00:05:27.342 03:49:02 -- common/autotest_common.sh@941 -- # uname 00:05:27.342 03:49:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.342 03:49:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56875 00:05:27.342 03:49:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.342 03:49:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.342 killing process with pid 56875 00:05:27.342 03:49:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56875' 00:05:27.342 03:49:02 -- common/autotest_common.sh@955 -- # kill 56875 00:05:27.342 03:49:02 -- common/autotest_common.sh@960 -- # wait 56875 00:05:27.599 spdk_app_start is called in Round 0. 00:05:27.599 Shutdown signal received, stop current app iteration 00:05:27.599 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.599 spdk_app_start is called in Round 1. 00:05:27.599 Shutdown signal received, stop current app iteration 00:05:27.599 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.599 spdk_app_start is called in Round 2. 00:05:27.599 Shutdown signal received, stop current app iteration 00:05:27.599 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.600 spdk_app_start is called in Round 3. 
00:05:27.600 Shutdown signal received, stop current app iteration 00:05:27.600 03:49:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.600 03:49:02 -- event/event.sh@42 -- # return 0 00:05:27.600 00:05:27.600 real 0m19.098s 00:05:27.600 user 0m42.561s 00:05:27.600 sys 0m3.021s 00:05:27.600 03:49:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.600 03:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.600 ************************************ 00:05:27.600 END TEST app_repeat 00:05:27.600 ************************************ 00:05:27.600 03:49:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.600 03:49:02 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.600 03:49:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.600 03:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.600 03:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.600 ************************************ 00:05:27.600 START TEST cpu_locks 00:05:27.600 ************************************ 00:05:27.600 03:49:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.857 * Looking for test storage... 00:05:27.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.857 03:49:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.857 03:49:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.857 03:49:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.857 03:49:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.857 03:49:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.857 03:49:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.857 03:49:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.857 03:49:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.857 03:49:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.857 03:49:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.857 03:49:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.857 03:49:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.857 03:49:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.857 03:49:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.857 03:49:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.857 03:49:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.857 03:49:02 -- scripts/common.sh@344 -- # : 1 00:05:27.857 03:49:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.857 03:49:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.857 03:49:02 -- scripts/common.sh@364 -- # decimal 1 00:05:27.857 03:49:02 -- scripts/common.sh@352 -- # local d=1 00:05:27.857 03:49:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.857 03:49:02 -- scripts/common.sh@354 -- # echo 1 00:05:27.857 03:49:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.857 03:49:02 -- scripts/common.sh@365 -- # decimal 2 00:05:27.857 03:49:02 -- scripts/common.sh@352 -- # local d=2 00:05:27.857 03:49:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.857 03:49:02 -- scripts/common.sh@354 -- # echo 2 00:05:27.857 03:49:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.857 03:49:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.857 03:49:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.857 03:49:02 -- scripts/common.sh@367 -- # return 0 00:05:27.857 03:49:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.857 03:49:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.857 --rc genhtml_branch_coverage=1 00:05:27.857 --rc genhtml_function_coverage=1 00:05:27.857 --rc genhtml_legend=1 00:05:27.857 --rc geninfo_all_blocks=1 00:05:27.857 --rc geninfo_unexecuted_blocks=1 00:05:27.857 00:05:27.857 ' 00:05:27.857 03:49:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.857 --rc genhtml_branch_coverage=1 00:05:27.857 --rc genhtml_function_coverage=1 00:05:27.857 --rc genhtml_legend=1 00:05:27.857 --rc geninfo_all_blocks=1 00:05:27.857 --rc geninfo_unexecuted_blocks=1 00:05:27.857 00:05:27.857 ' 00:05:27.857 03:49:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.857 --rc genhtml_branch_coverage=1 00:05:27.857 --rc genhtml_function_coverage=1 00:05:27.857 --rc genhtml_legend=1 00:05:27.857 --rc geninfo_all_blocks=1 00:05:27.857 --rc geninfo_unexecuted_blocks=1 00:05:27.857 00:05:27.857 ' 00:05:27.857 03:49:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.857 --rc genhtml_branch_coverage=1 00:05:27.857 --rc genhtml_function_coverage=1 00:05:27.857 --rc genhtml_legend=1 00:05:27.857 --rc geninfo_all_blocks=1 00:05:27.857 --rc geninfo_unexecuted_blocks=1 00:05:27.857 00:05:27.857 ' 00:05:27.857 03:49:02 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.857 03:49:02 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.857 03:49:02 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.857 03:49:02 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.857 03:49:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.858 03:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.858 03:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.858 ************************************ 00:05:27.858 START TEST default_locks 00:05:27.858 ************************************ 00:05:27.858 03:49:02 -- common/autotest_common.sh@1114 -- # default_locks 00:05:27.858 03:49:02 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57507 00:05:27.858 03:49:02 -- event/cpu_locks.sh@47 -- # waitforlisten 57507 00:05:27.858 03:49:02 -- common/autotest_common.sh@829 -- # '[' -z 57507 ']' 00:05:27.858 03:49:02 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.858 03:49:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.858 03:49:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.858 03:49:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.858 03:49:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.858 03:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.858 [2024-11-08 03:49:02.945811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.858 [2024-11-08 03:49:02.945932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57507 ] 00:05:28.117 [2024-11-08 03:49:03.082694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.117 [2024-11-08 03:49:03.186206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.117 [2024-11-08 03:49:03.186364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.054 03:49:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.054 03:49:03 -- common/autotest_common.sh@862 -- # return 0 00:05:29.054 03:49:03 -- event/cpu_locks.sh@49 -- # locks_exist 57507 00:05:29.054 03:49:03 -- event/cpu_locks.sh@22 -- # lslocks -p 57507 00:05:29.054 03:49:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.324 03:49:04 -- event/cpu_locks.sh@50 -- # killprocess 57507 00:05:29.324 03:49:04 -- common/autotest_common.sh@936 -- # '[' -z 57507 ']' 00:05:29.324 03:49:04 -- common/autotest_common.sh@940 -- # kill -0 57507 00:05:29.325 03:49:04 -- common/autotest_common.sh@941 -- # uname 00:05:29.325 03:49:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.325 03:49:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57507 00:05:29.325 03:49:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.325 03:49:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.325 killing process with pid 57507 00:05:29.325 03:49:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57507' 00:05:29.325 03:49:04 -- common/autotest_common.sh@955 -- # kill 57507 00:05:29.325 03:49:04 -- common/autotest_common.sh@960 -- # wait 57507 00:05:29.595 03:49:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57507 00:05:29.595 03:49:04 -- common/autotest_common.sh@650 -- # local es=0 00:05:29.595 03:49:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57507 00:05:29.595 03:49:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:29.595 03:49:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.595 03:49:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:29.595 03:49:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.595 03:49:04 -- common/autotest_common.sh@653 -- # waitforlisten 57507 00:05:29.595 03:49:04 -- common/autotest_common.sh@829 -- # '[' -z 57507 ']' 00:05:29.595 03:49:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.595 03:49:04 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.595 03:49:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.595 03:49:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.595 03:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.595 ERROR: process (pid: 57507) is no longer running 00:05:29.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57507) - No such process 00:05:29.595 03:49:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.595 03:49:04 -- common/autotest_common.sh@862 -- # return 1 00:05:29.595 03:49:04 -- common/autotest_common.sh@653 -- # es=1 00:05:29.595 03:49:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.595 03:49:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.596 03:49:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.596 03:49:04 -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.596 03:49:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.596 03:49:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.596 03:49:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.596 00:05:29.596 real 0m1.736s 00:05:29.596 user 0m1.859s 00:05:29.596 sys 0m0.475s 00:05:29.596 03:49:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.596 ************************************ 00:05:29.596 END TEST default_locks 00:05:29.596 ************************************ 00:05:29.596 03:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.596 03:49:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.596 03:49:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.596 03:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.596 03:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.596 ************************************ 00:05:29.596 START TEST default_locks_via_rpc 00:05:29.596 ************************************ 00:05:29.596 03:49:04 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:29.596 03:49:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57571 00:05:29.596 03:49:04 -- event/cpu_locks.sh@63 -- # waitforlisten 57571 00:05:29.596 03:49:04 -- common/autotest_common.sh@829 -- # '[' -z 57571 ']' 00:05:29.596 03:49:04 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.596 03:49:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.596 03:49:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.596 03:49:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.596 03:49:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.596 03:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.855 [2024-11-08 03:49:04.718580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
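The default_locks test that just ended hinges on autotest_common.sh's NOT wrapper: waitforlisten against the killed pid 57507 has to fail for the step to pass, and the es bookkeeping in the trace implements that inversion. Condensed sketch (the real helper also type-checks its argument before running it):

#!/usr/bin/env bash
# Sketch of the NOT helper: succeed only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # exit 0 exactly when the command did not
}

NOT false && echo "inverted failure -> step passes"
NOT true  || echo "inverted success -> step fails"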
00:05:29.855 [2024-11-08 03:49:04.718689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57571 ] 00:05:29.855 [2024-11-08 03:49:04.846035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.855 [2024-11-08 03:49:04.957260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.855 [2024-11-08 03:49:04.957472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.790 03:49:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.790 03:49:05 -- common/autotest_common.sh@862 -- # return 0 00:05:30.790 03:49:05 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:30.790 03:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.790 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:05:30.790 03:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.790 03:49:05 -- event/cpu_locks.sh@67 -- # no_locks 00:05:30.790 03:49:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.790 03:49:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.790 03:49:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.790 03:49:05 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.790 03:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.790 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:05:30.790 03:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.790 03:49:05 -- event/cpu_locks.sh@71 -- # locks_exist 57571 00:05:30.790 03:49:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.790 03:49:05 -- event/cpu_locks.sh@22 -- # lslocks -p 57571 00:05:31.048 03:49:06 -- event/cpu_locks.sh@73 -- # killprocess 57571 00:05:31.048 03:49:06 -- common/autotest_common.sh@936 -- # '[' -z 57571 ']' 00:05:31.048 03:49:06 -- common/autotest_common.sh@940 -- # kill -0 57571 00:05:31.048 03:49:06 -- common/autotest_common.sh@941 -- # uname 00:05:31.048 03:49:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.048 03:49:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57571 00:05:31.048 03:49:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.048 03:49:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.048 killing process with pid 57571 00:05:31.048 03:49:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57571' 00:05:31.048 03:49:06 -- common/autotest_common.sh@955 -- # kill 57571 00:05:31.048 03:49:06 -- common/autotest_common.sh@960 -- # wait 57571 00:05:31.614 00:05:31.614 real 0m2.025s 00:05:31.614 user 0m2.139s 00:05:31.614 sys 0m0.566s 00:05:31.614 03:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.614 03:49:06 -- common/autotest_common.sh@10 -- # set +x 00:05:31.614 ************************************ 00:05:31.614 END TEST default_locks_via_rpc 00:05:31.614 ************************************ 00:05:31.872 03:49:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.872 03:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.872 03:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.872 03:49:06 -- common/autotest_common.sh@10 -- # set +x 00:05:31.872 
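default_locks_via_rpc, which finishes here, toggles the core locks on a live target instead of at launch: framework_disable_cpumask_locks must leave no spdk_cpu_lock entries for the pid, and framework_enable_cpumask_locks must bring them back. The same sequence against a running spdk_tgt, as a sketch (rpc.py talks to the default /var/tmp/spdk.sock here; pass the target pid as the argument):

#!/usr/bin/env bash
# Sketch: toggle CPU core-mask locks over RPC and verify with lslocks.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=$1   # pid of a spdk_tgt started with -m 0x1

"$rpc" framework_disable_cpumask_locks
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "ERROR: lock still held after disable" >&2
    exit 1
fi

"$rpc" framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"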
************************************ 00:05:31.872 START TEST non_locking_app_on_locked_coremask 00:05:31.872 ************************************ 00:05:31.872 03:49:06 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:31.872 03:49:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57640 00:05:31.872 03:49:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.872 03:49:06 -- event/cpu_locks.sh@81 -- # waitforlisten 57640 /var/tmp/spdk.sock 00:05:31.872 03:49:06 -- common/autotest_common.sh@829 -- # '[' -z 57640 ']' 00:05:31.872 03:49:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.872 03:49:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.872 03:49:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.872 03:49:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.872 03:49:06 -- common/autotest_common.sh@10 -- # set +x 00:05:31.872 [2024-11-08 03:49:06.790735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.872 [2024-11-08 03:49:06.790826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57640 ] 00:05:31.872 [2024-11-08 03:49:06.921974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.130 [2024-11-08 03:49:07.006683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.130 [2024-11-08 03:49:07.006841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.697 03:49:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.697 03:49:07 -- common/autotest_common.sh@862 -- # return 0 00:05:32.697 03:49:07 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.697 03:49:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57668 00:05:32.697 03:49:07 -- event/cpu_locks.sh@85 -- # waitforlisten 57668 /var/tmp/spdk2.sock 00:05:32.697 03:49:07 -- common/autotest_common.sh@829 -- # '[' -z 57668 ']' 00:05:32.697 03:49:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.697 03:49:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.697 03:49:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.697 03:49:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.697 03:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:32.697 [2024-11-08 03:49:07.787958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.697 [2024-11-08 03:49:07.788060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:05:32.956 [2024-11-08 03:49:07.924700] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
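The pair of launches above is the whole point of non_locking_app_on_locked_coremask: pid 57640 locks core 0, and pid 57668 may share that core only because --disable-cpumask-locks skips lock acquisition (hence the "CPU core locks deactivated" notice). Reduced to a sketch, with a fixed sleep standing in for the real waitforlisten:

#!/usr/bin/env bash
# Sketch: a locked and an unlocked target coexisting on core 0.
set -euo pipefail
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$tgt" -m 0x1 & pid1=$!                  # first instance locks core 0
sleep 2                                  # crude stand-in for waitforlisten

"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
sleep 2                                  # second instance takes no lock

lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "only pid $pid1 holds the lock"
kill "$pid1" "$pid2"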
00:05:32.956 [2024-11-08 03:49:07.924771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.215 [2024-11-08 03:49:08.099627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.215 [2024-11-08 03:49:08.099790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.782 03:49:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.782 03:49:08 -- common/autotest_common.sh@862 -- # return 0 00:05:33.782 03:49:08 -- event/cpu_locks.sh@87 -- # locks_exist 57640 00:05:33.782 03:49:08 -- event/cpu_locks.sh@22 -- # lslocks -p 57640 00:05:33.782 03:49:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.718 03:49:09 -- event/cpu_locks.sh@89 -- # killprocess 57640 00:05:34.718 03:49:09 -- common/autotest_common.sh@936 -- # '[' -z 57640 ']' 00:05:34.718 03:49:09 -- common/autotest_common.sh@940 -- # kill -0 57640 00:05:34.718 03:49:09 -- common/autotest_common.sh@941 -- # uname 00:05:34.718 03:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.718 03:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57640 00:05:34.718 03:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.718 03:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.718 killing process with pid 57640 00:05:34.718 03:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57640' 00:05:34.718 03:49:09 -- common/autotest_common.sh@955 -- # kill 57640 00:05:34.718 03:49:09 -- common/autotest_common.sh@960 -- # wait 57640 00:05:36.093 03:49:10 -- event/cpu_locks.sh@90 -- # killprocess 57668 00:05:36.093 03:49:10 -- common/autotest_common.sh@936 -- # '[' -z 57668 ']' 00:05:36.093 03:49:10 -- common/autotest_common.sh@940 -- # kill -0 57668 00:05:36.093 03:49:10 -- common/autotest_common.sh@941 -- # uname 00:05:36.093 03:49:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.093 03:49:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57668 00:05:36.093 03:49:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.093 03:49:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.093 killing process with pid 57668 00:05:36.093 03:49:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57668' 00:05:36.093 03:49:10 -- common/autotest_common.sh@955 -- # kill 57668 00:05:36.093 03:49:10 -- common/autotest_common.sh@960 -- # wait 57668 00:05:36.351 00:05:36.351 real 0m4.618s 00:05:36.351 user 0m4.905s 00:05:36.351 sys 0m1.300s 00:05:36.351 03:49:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.351 03:49:11 -- common/autotest_common.sh@10 -- # set +x 00:05:36.351 ************************************ 00:05:36.351 END TEST non_locking_app_on_locked_coremask 00:05:36.351 ************************************ 00:05:36.351 03:49:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.351 03:49:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.351 03:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.351 03:49:11 -- common/autotest_common.sh@10 -- # set +x 00:05:36.351 ************************************ 00:05:36.351 START TEST locking_app_on_unlocked_coremask 00:05:36.351 ************************************ 00:05:36.351 03:49:11 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:36.351 03:49:11 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57747 00:05:36.351 03:49:11 -- event/cpu_locks.sh@99 -- # waitforlisten 57747 /var/tmp/spdk.sock 00:05:36.351 03:49:11 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.351 03:49:11 -- common/autotest_common.sh@829 -- # '[' -z 57747 ']' 00:05:36.351 03:49:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.351 03:49:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.351 03:49:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.351 03:49:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.351 03:49:11 -- common/autotest_common.sh@10 -- # set +x 00:05:36.351 [2024-11-08 03:49:11.457054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.351 [2024-11-08 03:49:11.457153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:05:36.609 [2024-11-08 03:49:11.586407] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:36.609 [2024-11-08 03:49:11.586450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.609 [2024-11-08 03:49:11.693394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.609 [2024-11-08 03:49:11.693574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.570 03:49:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.570 03:49:12 -- common/autotest_common.sh@862 -- # return 0 00:05:37.570 03:49:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57779 00:05:37.570 03:49:12 -- event/cpu_locks.sh@103 -- # waitforlisten 57779 /var/tmp/spdk2.sock 00:05:37.570 03:49:12 -- common/autotest_common.sh@829 -- # '[' -z 57779 ']' 00:05:37.570 03:49:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.570 03:49:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.570 03:49:12 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.570 03:49:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.570 03:49:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.570 03:49:12 -- common/autotest_common.sh@10 -- # set +x 00:05:37.570 [2024-11-08 03:49:12.544538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:37.570 [2024-11-08 03:49:12.544672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57779 ] 00:05:37.828 [2024-11-08 03:49:12.689762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.086 [2024-11-08 03:49:12.946410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.086 [2024-11-08 03:49:12.950617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.459 03:49:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.459 03:49:14 -- common/autotest_common.sh@862 -- # return 0 00:05:39.459 03:49:14 -- event/cpu_locks.sh@105 -- # locks_exist 57779 00:05:39.459 03:49:14 -- event/cpu_locks.sh@22 -- # lslocks -p 57779 00:05:39.459 03:49:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.025 03:49:15 -- event/cpu_locks.sh@107 -- # killprocess 57747 00:05:40.025 03:49:15 -- common/autotest_common.sh@936 -- # '[' -z 57747 ']' 00:05:40.025 03:49:15 -- common/autotest_common.sh@940 -- # kill -0 57747 00:05:40.025 03:49:15 -- common/autotest_common.sh@941 -- # uname 00:05:40.025 03:49:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.025 03:49:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57747 00:05:40.025 03:49:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.025 03:49:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.025 killing process with pid 57747 00:05:40.025 03:49:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57747' 00:05:40.025 03:49:15 -- common/autotest_common.sh@955 -- # kill 57747 00:05:40.025 03:49:15 -- common/autotest_common.sh@960 -- # wait 57747 00:05:41.399 03:49:16 -- event/cpu_locks.sh@108 -- # killprocess 57779 00:05:41.400 03:49:16 -- common/autotest_common.sh@936 -- # '[' -z 57779 ']' 00:05:41.400 03:49:16 -- common/autotest_common.sh@940 -- # kill -0 57779 00:05:41.400 03:49:16 -- common/autotest_common.sh@941 -- # uname 00:05:41.400 03:49:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.400 03:49:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57779 00:05:41.400 03:49:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.400 03:49:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.400 killing process with pid 57779 00:05:41.400 03:49:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57779' 00:05:41.400 03:49:16 -- common/autotest_common.sh@955 -- # kill 57779 00:05:41.400 03:49:16 -- common/autotest_common.sh@960 -- # wait 57779 00:05:41.966 00:05:41.966 real 0m5.418s 00:05:41.966 user 0m5.988s 00:05:41.966 sys 0m1.314s 00:05:41.966 03:49:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.966 ************************************ 00:05:41.966 END TEST locking_app_on_unlocked_coremask 00:05:41.966 03:49:16 -- common/autotest_common.sh@10 -- # set +x 00:05:41.966 ************************************ 00:05:41.966 03:49:16 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.966 03:49:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.966 03:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.966 03:49:16 -- common/autotest_common.sh@10 -- # set +x 
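locking_app_on_unlocked_coremask, closed out above, inverts the previous case: the unlocked first instance (57747) leaves core 0 unclaimed, so the second, lock-taking instance (57779) acquires it. The recurring check behind all of these steps is one lslocks pipeline; as a sketch:

# Sketch of the locks_exist check used throughout cpu_locks.sh: a target
# that claimed its cores holds locks on /var/tmp/spdk_cpu_lock_* files,
# which lslocks -p reports for that pid.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist 57779 && echo "pid 57779 owns its core locks"   # pid from this run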
00:05:41.966 ************************************ 00:05:41.966 START TEST locking_app_on_locked_coremask 00:05:41.966 ************************************ 00:05:41.966 03:49:16 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:41.966 03:49:16 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57879 00:05:41.966 03:49:16 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.966 03:49:16 -- event/cpu_locks.sh@116 -- # waitforlisten 57879 /var/tmp/spdk.sock 00:05:41.966 03:49:16 -- common/autotest_common.sh@829 -- # '[' -z 57879 ']' 00:05:41.966 03:49:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.966 03:49:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.966 03:49:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.966 03:49:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.966 03:49:16 -- common/autotest_common.sh@10 -- # set +x 00:05:41.966 [2024-11-08 03:49:16.917631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.966 [2024-11-08 03:49:16.917720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:05:41.966 [2024-11-08 03:49:17.040794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.224 [2024-11-08 03:49:17.148803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.224 [2024-11-08 03:49:17.148976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.158 03:49:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.158 03:49:17 -- common/autotest_common.sh@862 -- # return 0 00:05:43.158 03:49:17 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57912 00:05:43.158 03:49:17 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57912 /var/tmp/spdk2.sock 00:05:43.158 03:49:17 -- common/autotest_common.sh@650 -- # local es=0 00:05:43.158 03:49:17 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57912 /var/tmp/spdk2.sock 00:05:43.158 03:49:17 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.158 03:49:17 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.158 03:49:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.158 03:49:17 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.158 03:49:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.158 03:49:17 -- common/autotest_common.sh@653 -- # waitforlisten 57912 /var/tmp/spdk2.sock 00:05:43.158 03:49:17 -- common/autotest_common.sh@829 -- # '[' -z 57912 ']' 00:05:43.158 03:49:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.158 03:49:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.158 03:49:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
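The second launch above (would-be pid 57912) targets the core 0 that pid 57879 already locked, so the NOT-wrapped waitforlisten that follows is expected to fail. The scenario in miniature, assuming the same binary and flags as this run:

#!/usr/bin/env bash
# Sketch: a second lock-taking target on an already-locked core must abort.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$tgt" -m 0x1 & pid1=$!
sleep 2                                   # stand-in for waitforlisten
if ! "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance refused to start, as the test requires"
fi
kill "$pid1"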
00:05:43.158 03:49:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.158 03:49:17 -- common/autotest_common.sh@10 -- # set +x 00:05:43.158 [2024-11-08 03:49:18.002745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.158 [2024-11-08 03:49:18.002887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:05:43.158 [2024-11-08 03:49:18.143987] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57879 has claimed it. 00:05:43.158 [2024-11-08 03:49:18.144058] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57912) - No such process 00:05:43.724 ERROR: process (pid: 57912) is no longer running 00:05:43.724 03:49:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.724 03:49:18 -- common/autotest_common.sh@862 -- # return 1 00:05:43.724 03:49:18 -- common/autotest_common.sh@653 -- # es=1 00:05:43.724 03:49:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.724 03:49:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.724 03:49:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.724 03:49:18 -- event/cpu_locks.sh@122 -- # locks_exist 57879 00:05:43.724 03:49:18 -- event/cpu_locks.sh@22 -- # lslocks -p 57879 00:05:43.724 03:49:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.394 03:49:19 -- event/cpu_locks.sh@124 -- # killprocess 57879 00:05:44.394 03:49:19 -- common/autotest_common.sh@936 -- # '[' -z 57879 ']' 00:05:44.394 03:49:19 -- common/autotest_common.sh@940 -- # kill -0 57879 00:05:44.394 03:49:19 -- common/autotest_common.sh@941 -- # uname 00:05:44.394 03:49:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.394 03:49:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57879 00:05:44.394 03:49:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.394 killing process with pid 57879 00:05:44.394 03:49:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.394 03:49:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57879' 00:05:44.394 03:49:19 -- common/autotest_common.sh@955 -- # kill 57879 00:05:44.394 03:49:19 -- common/autotest_common.sh@960 -- # wait 57879 00:05:44.967 00:05:44.967 real 0m2.900s 00:05:44.967 user 0m3.323s 00:05:44.967 sys 0m0.708s 00:05:44.967 03:49:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.967 ************************************ 00:05:44.967 END TEST locking_app_on_locked_coremask 00:05:44.967 ************************************ 00:05:44.967 03:49:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.967 03:49:19 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.967 03:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.967 03:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.967 03:49:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.967 ************************************ 00:05:44.967 START TEST locking_overlapped_coremask 00:05:44.967 ************************************ 00:05:44.967 03:49:19 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:44.967 03:49:19 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57963 00:05:44.967 03:49:19 -- event/cpu_locks.sh@133 -- # waitforlisten 57963 /var/tmp/spdk.sock 00:05:44.967 03:49:19 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.967 03:49:19 -- common/autotest_common.sh@829 -- # '[' -z 57963 ']' 00:05:44.967 03:49:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.967 03:49:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.967 03:49:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.968 03:49:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.968 03:49:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.968 [2024-11-08 03:49:19.892102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.968 [2024-11-08 03:49:19.892235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57963 ] 00:05:44.968 [2024-11-08 03:49:20.033003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.226 [2024-11-08 03:49:20.134494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.226 [2024-11-08 03:49:20.134796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.226 [2024-11-08 03:49:20.134929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.226 [2024-11-08 03:49:20.134935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.162 03:49:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.162 03:49:20 -- common/autotest_common.sh@862 -- # return 0 00:05:46.162 03:49:20 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57993 00:05:46.162 03:49:20 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:46.162 03:49:20 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57993 /var/tmp/spdk2.sock 00:05:46.162 03:49:20 -- common/autotest_common.sh@650 -- # local es=0 00:05:46.162 03:49:20 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57993 /var/tmp/spdk2.sock 00:05:46.162 03:49:20 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:46.162 03:49:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.162 03:49:20 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:46.162 03:49:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.162 03:49:20 -- common/autotest_common.sh@653 -- # waitforlisten 57993 /var/tmp/spdk2.sock 00:05:46.162 03:49:20 -- common/autotest_common.sh@829 -- # '[' -z 57993 ']' 00:05:46.162 03:49:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.162 03:49:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.162 03:49:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
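The two masks in play here only partially overlap: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the conflict reported below is for core 2 alone. The contested set is plain bit arithmetic:

# Masks from this test: 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4.
# Their intersection is the contested set; here only core 2.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2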
00:05:46.162 03:49:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.162 03:49:20 -- common/autotest_common.sh@10 -- # set +x 00:05:46.162 [2024-11-08 03:49:20.959413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.162 [2024-11-08 03:49:20.960086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57993 ] 00:05:46.162 [2024-11-08 03:49:21.097477] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57963 has claimed it. 00:05:46.162 [2024-11-08 03:49:21.097550] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:46.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57993) - No such process 00:05:46.729 ERROR: process (pid: 57993) is no longer running 00:05:46.729 03:49:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.729 03:49:21 -- common/autotest_common.sh@862 -- # return 1 00:05:46.729 03:49:21 -- common/autotest_common.sh@653 -- # es=1 00:05:46.729 03:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.729 03:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.729 03:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.729 03:49:21 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:46.729 03:49:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.729 03:49:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.729 03:49:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.729 03:49:21 -- event/cpu_locks.sh@141 -- # killprocess 57963 00:05:46.729 03:49:21 -- common/autotest_common.sh@936 -- # '[' -z 57963 ']' 00:05:46.729 03:49:21 -- common/autotest_common.sh@940 -- # kill -0 57963 00:05:46.729 03:49:21 -- common/autotest_common.sh@941 -- # uname 00:05:46.729 03:49:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.729 03:49:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57963 00:05:46.729 03:49:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.729 03:49:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.729 killing process with pid 57963 00:05:46.729 03:49:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57963' 00:05:46.729 03:49:21 -- common/autotest_common.sh@955 -- # kill 57963 00:05:46.729 03:49:21 -- common/autotest_common.sh@960 -- # wait 57963 00:05:47.296 00:05:47.296 real 0m2.458s 00:05:47.296 user 0m6.751s 00:05:47.296 sys 0m0.525s 00:05:47.296 03:49:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.296 03:49:22 -- common/autotest_common.sh@10 -- # set +x 00:05:47.296 ************************************ 00:05:47.296 END TEST locking_overlapped_coremask 00:05:47.296 ************************************ 00:05:47.296 03:49:22 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:47.296 03:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.296 03:49:22 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.296 03:49:22 -- common/autotest_common.sh@10 -- # set +x 00:05:47.296 ************************************ 00:05:47.296 START TEST locking_overlapped_coremask_via_rpc 00:05:47.296 ************************************ 00:05:47.296 03:49:22 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:47.296 03:49:22 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58045 00:05:47.296 03:49:22 -- event/cpu_locks.sh@149 -- # waitforlisten 58045 /var/tmp/spdk.sock 00:05:47.296 03:49:22 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:47.296 03:49:22 -- common/autotest_common.sh@829 -- # '[' -z 58045 ']' 00:05:47.296 03:49:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.296 03:49:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.296 03:49:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.296 03:49:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.296 03:49:22 -- common/autotest_common.sh@10 -- # set +x 00:05:47.296 [2024-11-08 03:49:22.391680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.296 [2024-11-08 03:49:22.391762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58045 ] 00:05:47.554 [2024-11-08 03:49:22.522479] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.554 [2024-11-08 03:49:22.522513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.554 [2024-11-08 03:49:22.630859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.554 [2024-11-08 03:49:22.631180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.554 [2024-11-08 03:49:22.631300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.554 [2024-11-08 03:49:22.631309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.486 03:49:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.486 03:49:23 -- common/autotest_common.sh@862 -- # return 0 00:05:48.486 03:49:23 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58075 00:05:48.486 03:49:23 -- event/cpu_locks.sh@153 -- # waitforlisten 58075 /var/tmp/spdk2.sock 00:05:48.486 03:49:23 -- common/autotest_common.sh@829 -- # '[' -z 58075 ']' 00:05:48.486 03:49:23 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:48.486 03:49:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.486 03:49:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.486 03:49:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:48.486 03:49:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.486 03:49:23 -- common/autotest_common.sh@10 -- # set +x 00:05:48.486 [2024-11-08 03:49:23.469282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.486 [2024-11-08 03:49:23.469466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58075 ] 00:05:48.743 [2024-11-08 03:49:23.614285] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:48.744 [2024-11-08 03:49:23.614335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.744 [2024-11-08 03:49:23.850099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.744 [2024-11-08 03:49:23.850444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.002 [2024-11-08 03:49:23.853492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:49.002 [2024-11-08 03:49:23.853496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.568 03:49:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.568 03:49:24 -- common/autotest_common.sh@862 -- # return 0 00:05:49.568 03:49:24 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.568 03:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.568 03:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.568 03:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.568 03:49:24 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.568 03:49:24 -- common/autotest_common.sh@650 -- # local es=0 00:05:49.568 03:49:24 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.568 03:49:24 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:49.568 03:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.568 03:49:24 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:49.568 03:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.568 03:49:24 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.568 03:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.568 03:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.568 [2024-11-08 03:49:24.506583] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58045 has claimed it. 
00:05:49.568 2024/11/08 03:49:24 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:49.568 request: 00:05:49.568 { 00:05:49.568 "method": "framework_enable_cpumask_locks", 00:05:49.568 "params": {} 00:05:49.568 } 00:05:49.568 Got JSON-RPC error response 00:05:49.568 GoRPCClient: error on JSON-RPC call 00:05:49.568 03:49:24 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:49.568 03:49:24 -- common/autotest_common.sh@653 -- # es=1 00:05:49.568 03:49:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.568 03:49:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.568 03:49:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.568 03:49:24 -- event/cpu_locks.sh@158 -- # waitforlisten 58045 /var/tmp/spdk.sock 00:05:49.568 03:49:24 -- common/autotest_common.sh@829 -- # '[' -z 58045 ']' 00:05:49.568 03:49:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.568 03:49:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.568 03:49:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.568 03:49:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.568 03:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 03:49:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.826 03:49:24 -- common/autotest_common.sh@862 -- # return 0 00:05:49.826 03:49:24 -- event/cpu_locks.sh@159 -- # waitforlisten 58075 /var/tmp/spdk2.sock 00:05:49.826 03:49:24 -- common/autotest_common.sh@829 -- # '[' -z 58075 ']' 00:05:49.826 03:49:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.826 03:49:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.826 03:49:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
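The error above shows the full round trip: {"method": "framework_enable_cpumask_locks", "params": {}} sent to the second target at /var/tmp/spdk2.sock comes back with code -32603 because pid 58045 already holds core 2. A raw-socket sketch of the same call in C, assuming the server accepts one JSON object per connection (the framing here is an assumption; the suite itself goes through rpc_cmd and the Go client):

/* Minimal raw JSON-RPC client for the call logged above. Sketch only. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,"
        "\"method\":\"framework_enable_cpumask_locks\",\"params\":{}}";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char resp[4096];
    ssize_t n;
    int fd;

    strncpy(addr.sun_path, "/var/tmp/spdk2.sock", sizeof(addr.sun_path) - 1);

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }

    /* On core contention this returns Code=-32603
     * Msg="Failed to claim CPU core: 2", as in the log above. */
    n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }

    close(fd);
    return 0;
}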
00:05:49.826 03:49:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.826 03:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.084 03:49:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.084 03:49:25 -- common/autotest_common.sh@862 -- # return 0 00:05:50.084 03:49:25 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:50.084 03:49:25 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.084 03:49:25 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.084 03:49:25 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.084 00:05:50.084 real 0m2.763s 00:05:50.084 user 0m1.436s 00:05:50.084 sys 0m0.252s 00:05:50.084 03:49:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.084 03:49:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.084 ************************************ 00:05:50.084 END TEST locking_overlapped_coremask_via_rpc 00:05:50.084 ************************************ 00:05:50.084 03:49:25 -- event/cpu_locks.sh@174 -- # cleanup 00:05:50.084 03:49:25 -- event/cpu_locks.sh@15 -- # [[ -z 58045 ]] 00:05:50.084 03:49:25 -- event/cpu_locks.sh@15 -- # killprocess 58045 00:05:50.084 03:49:25 -- common/autotest_common.sh@936 -- # '[' -z 58045 ']' 00:05:50.084 03:49:25 -- common/autotest_common.sh@940 -- # kill -0 58045 00:05:50.084 03:49:25 -- common/autotest_common.sh@941 -- # uname 00:05:50.084 03:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.084 03:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58045 00:05:50.084 03:49:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.084 killing process with pid 58045 00:05:50.084 03:49:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.084 03:49:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58045' 00:05:50.084 03:49:25 -- common/autotest_common.sh@955 -- # kill 58045 00:05:50.084 03:49:25 -- common/autotest_common.sh@960 -- # wait 58045 00:05:51.019 03:49:25 -- event/cpu_locks.sh@16 -- # [[ -z 58075 ]] 00:05:51.019 03:49:25 -- event/cpu_locks.sh@16 -- # killprocess 58075 00:05:51.019 03:49:25 -- common/autotest_common.sh@936 -- # '[' -z 58075 ']' 00:05:51.019 03:49:25 -- common/autotest_common.sh@940 -- # kill -0 58075 00:05:51.019 03:49:25 -- common/autotest_common.sh@941 -- # uname 00:05:51.019 03:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.019 03:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58075 00:05:51.019 03:49:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:51.019 killing process with pid 58075 00:05:51.019 03:49:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:51.019 03:49:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58075' 00:05:51.019 03:49:25 -- common/autotest_common.sh@955 -- # kill 58075 00:05:51.019 03:49:25 -- common/autotest_common.sh@960 -- # wait 58075 00:05:51.586 03:49:26 -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.586 03:49:26 -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.586 03:49:26 -- event/cpu_locks.sh@15 -- # [[ -z 58045 ]] 00:05:51.586 03:49:26 -- event/cpu_locks.sh@15 -- # killprocess 58045 00:05:51.586 03:49:26 -- 
common/autotest_common.sh@936 -- # '[' -z 58045 ']' 00:05:51.586 03:49:26 -- common/autotest_common.sh@940 -- # kill -0 58045 00:05:51.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58045) - No such process 00:05:51.586 Process with pid 58045 is not found 00:05:51.586 03:49:26 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58045 is not found' 00:05:51.586 03:49:26 -- event/cpu_locks.sh@16 -- # [[ -z 58075 ]] 00:05:51.586 03:49:26 -- event/cpu_locks.sh@16 -- # killprocess 58075 00:05:51.586 03:49:26 -- common/autotest_common.sh@936 -- # '[' -z 58075 ']' 00:05:51.586 03:49:26 -- common/autotest_common.sh@940 -- # kill -0 58075 00:05:51.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58075) - No such process 00:05:51.586 Process with pid 58075 is not found 00:05:51.586 03:49:26 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58075 is not found' 00:05:51.586 03:49:26 -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.586 00:05:51.586 real 0m23.761s 00:05:51.586 user 0m40.865s 00:05:51.586 sys 0m6.192s 00:05:51.586 03:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.586 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:05:51.586 ************************************ 00:05:51.586 END TEST cpu_locks 00:05:51.586 ************************************ 00:05:51.586 00:05:51.586 real 0m51.504s 00:05:51.586 user 1m36.097s 00:05:51.586 sys 0m10.037s 00:05:51.586 03:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.586 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:05:51.586 ************************************ 00:05:51.586 END TEST event 00:05:51.586 ************************************ 00:05:51.586 03:49:26 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.586 03:49:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.586 03:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.586 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:05:51.586 ************************************ 00:05:51.586 START TEST thread 00:05:51.586 ************************************ 00:05:51.586 03:49:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.586 * Looking for test storage... 
00:05:51.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:51.586 03:49:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.586 03:49:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.586 03:49:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.845 03:49:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.845 03:49:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.845 03:49:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.845 03:49:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.845 03:49:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.845 03:49:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.845 03:49:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.845 03:49:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.845 03:49:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.845 03:49:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.845 03:49:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.845 03:49:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.845 03:49:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.845 03:49:26 -- scripts/common.sh@344 -- # : 1 00:05:51.845 03:49:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.845 03:49:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.845 03:49:26 -- scripts/common.sh@364 -- # decimal 1 00:05:51.845 03:49:26 -- scripts/common.sh@352 -- # local d=1 00:05:51.845 03:49:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.845 03:49:26 -- scripts/common.sh@354 -- # echo 1 00:05:51.845 03:49:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.845 03:49:26 -- scripts/common.sh@365 -- # decimal 2 00:05:51.845 03:49:26 -- scripts/common.sh@352 -- # local d=2 00:05:51.845 03:49:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.845 03:49:26 -- scripts/common.sh@354 -- # echo 2 00:05:51.845 03:49:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.845 03:49:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.845 03:49:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.845 03:49:26 -- scripts/common.sh@367 -- # return 0 00:05:51.845 03:49:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.845 03:49:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.845 --rc genhtml_branch_coverage=1 00:05:51.845 --rc genhtml_function_coverage=1 00:05:51.845 --rc genhtml_legend=1 00:05:51.845 --rc geninfo_all_blocks=1 00:05:51.845 --rc geninfo_unexecuted_blocks=1 00:05:51.845 00:05:51.845 ' 00:05:51.845 03:49:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.845 --rc genhtml_branch_coverage=1 00:05:51.845 --rc genhtml_function_coverage=1 00:05:51.845 --rc genhtml_legend=1 00:05:51.845 --rc geninfo_all_blocks=1 00:05:51.845 --rc geninfo_unexecuted_blocks=1 00:05:51.845 00:05:51.845 ' 00:05:51.845 03:49:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.845 --rc genhtml_branch_coverage=1 00:05:51.845 --rc genhtml_function_coverage=1 00:05:51.845 --rc genhtml_legend=1 00:05:51.845 --rc geninfo_all_blocks=1 00:05:51.845 --rc geninfo_unexecuted_blocks=1 00:05:51.845 00:05:51.845 ' 00:05:51.845 03:49:26 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.845 --rc genhtml_branch_coverage=1 00:05:51.845 --rc genhtml_function_coverage=1 00:05:51.845 --rc genhtml_legend=1 00:05:51.845 --rc geninfo_all_blocks=1 00:05:51.845 --rc geninfo_unexecuted_blocks=1 00:05:51.845 00:05:51.845 ' 00:05:51.845 03:49:26 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.845 03:49:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:51.845 03:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.845 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:05:51.845 ************************************ 00:05:51.845 START TEST thread_poller_perf 00:05:51.845 ************************************ 00:05:51.845 03:49:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.845 [2024-11-08 03:49:26.744463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.845 [2024-11-08 03:49:26.744590] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58234 ] 00:05:51.845 [2024-11-08 03:49:26.885661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.104 [2024-11-08 03:49:27.047446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.104 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:53.479 [2024-11-08T03:49:28.590Z] ====================================== 00:05:53.479 [2024-11-08T03:49:28.590Z] busy:2211565372 (cyc) 00:05:53.479 [2024-11-08T03:49:28.590Z] total_run_count: 332000 00:05:53.479 [2024-11-08T03:49:28.590Z] tsc_hz: 2200000000 (cyc) 00:05:53.479 [2024-11-08T03:49:28.590Z] ====================================== 00:05:53.479 [2024-11-08T03:49:28.590Z] poller_cost: 6661 (cyc), 3027 (nsec) 00:05:53.479 00:05:53.479 real 0m1.487s 00:05:53.479 user 0m1.310s 00:05:53.479 sys 0m0.067s 00:05:53.479 03:49:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.479 ************************************ 00:05:53.479 END TEST thread_poller_perf 00:05:53.479 ************************************ 00:05:53.479 03:49:28 -- common/autotest_common.sh@10 -- # set +x 00:05:53.479 03:49:28 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.479 03:49:28 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:53.479 03:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.479 03:49:28 -- common/autotest_common.sh@10 -- # set +x 00:05:53.479 ************************************ 00:05:53.479 START TEST thread_poller_perf 00:05:53.479 ************************************ 00:05:53.479 03:49:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.479 [2024-11-08 03:49:28.278063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
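poller_cost in these result tables is busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checking the -l 1 run above with its own figures reproduces 6661 (cyc) and 3027 (nsec) under plain integer division; whether poller_perf rounds exactly this way is an assumption, but the arithmetic matches this run's output:

/* Recompute poller_cost from the -l 1 results table above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t busy_cyc  = 2211565372ULL;  /* busy (cyc) from the table */
    uint64_t run_count = 332000ULL;      /* total_run_count */
    uint64_t tsc_hz    = 2200000000ULL;  /* 2.2 GHz */

    uint64_t cost_cyc  = busy_cyc / run_count;               /* 6661 */
    uint64_t cost_nsec = cost_cyc * 1000000000ULL / tsc_hz;  /* 3027 */

    printf("poller_cost: %llu (cyc), %llu (nsec)\n",
           (unsigned long long)cost_cyc, (unsigned long long)cost_nsec);
    return 0;
}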
00:05:53.479 [2024-11-08 03:49:28.278162] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58275 ] 00:05:53.479 [2024-11-08 03:49:28.415725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.479 [2024-11-08 03:49:28.552147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.479 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:54.854 [2024-11-08T03:49:29.965Z] ====================================== 00:05:54.854 [2024-11-08T03:49:29.965Z] busy:2202719652 (cyc) 00:05:54.854 [2024-11-08T03:49:29.965Z] total_run_count: 4766000 00:05:54.854 [2024-11-08T03:49:29.965Z] tsc_hz: 2200000000 (cyc) 00:05:54.854 [2024-11-08T03:49:29.965Z] ====================================== 00:05:54.854 [2024-11-08T03:49:29.965Z] poller_cost: 462 (cyc), 210 (nsec) 00:05:54.854 00:05:54.854 real 0m1.437s 00:05:54.854 user 0m1.257s 00:05:54.854 sys 0m0.072s 00:05:54.854 03:49:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.854 ************************************ 00:05:54.854 END TEST thread_poller_perf 00:05:54.854 ************************************ 00:05:54.854 03:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 03:49:29 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:54.854 00:05:54.854 real 0m3.206s 00:05:54.854 user 0m2.697s 00:05:54.854 sys 0m0.290s 00:05:54.854 03:49:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.854 03:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 ************************************ 00:05:54.854 END TEST thread 00:05:54.854 ************************************ 00:05:54.854 03:49:29 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:54.854 03:49:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.854 03:49:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.855 03:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.855 ************************************ 00:05:54.855 START TEST accel 00:05:54.855 ************************************ 00:05:54.855 03:49:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:54.855 * Looking for test storage... 
00:05:54.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:54.855 03:49:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.855 03:49:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.855 03:49:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.855 03:49:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.855 03:49:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.855 03:49:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.113 03:49:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.113 03:49:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.113 03:49:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.113 03:49:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.113 03:49:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.113 03:49:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.113 03:49:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.113 03:49:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.113 03:49:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.113 03:49:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.113 03:49:29 -- scripts/common.sh@344 -- # : 1 00:05:55.113 03:49:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.113 03:49:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.113 03:49:29 -- scripts/common.sh@364 -- # decimal 1 00:05:55.113 03:49:29 -- scripts/common.sh@352 -- # local d=1 00:05:55.113 03:49:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.113 03:49:29 -- scripts/common.sh@354 -- # echo 1 00:05:55.113 03:49:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.113 03:49:29 -- scripts/common.sh@365 -- # decimal 2 00:05:55.113 03:49:29 -- scripts/common.sh@352 -- # local d=2 00:05:55.113 03:49:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.113 03:49:29 -- scripts/common.sh@354 -- # echo 2 00:05:55.113 03:49:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.113 03:49:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.113 03:49:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.113 03:49:29 -- scripts/common.sh@367 -- # return 0 00:05:55.113 03:49:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.113 03:49:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.113 --rc genhtml_branch_coverage=1 00:05:55.113 --rc genhtml_function_coverage=1 00:05:55.113 --rc genhtml_legend=1 00:05:55.113 --rc geninfo_all_blocks=1 00:05:55.113 --rc geninfo_unexecuted_blocks=1 00:05:55.113 00:05:55.113 ' 00:05:55.113 03:49:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.113 --rc genhtml_branch_coverage=1 00:05:55.113 --rc genhtml_function_coverage=1 00:05:55.113 --rc genhtml_legend=1 00:05:55.113 --rc geninfo_all_blocks=1 00:05:55.113 --rc geninfo_unexecuted_blocks=1 00:05:55.113 00:05:55.113 ' 00:05:55.113 03:49:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.113 --rc genhtml_branch_coverage=1 00:05:55.113 --rc genhtml_function_coverage=1 00:05:55.113 --rc genhtml_legend=1 00:05:55.113 --rc geninfo_all_blocks=1 00:05:55.113 --rc geninfo_unexecuted_blocks=1 00:05:55.113 00:05:55.113 ' 00:05:55.113 03:49:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.113 --rc genhtml_branch_coverage=1 00:05:55.113 --rc genhtml_function_coverage=1 00:05:55.113 --rc genhtml_legend=1 00:05:55.113 --rc geninfo_all_blocks=1 00:05:55.113 --rc geninfo_unexecuted_blocks=1 00:05:55.113 00:05:55.113 ' 00:05:55.113 03:49:29 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:55.113 03:49:29 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:55.113 03:49:29 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.113 03:49:29 -- accel/accel.sh@59 -- # spdk_tgt_pid=58351 00:05:55.113 03:49:29 -- accel/accel.sh@60 -- # waitforlisten 58351 00:05:55.113 03:49:29 -- common/autotest_common.sh@829 -- # '[' -z 58351 ']' 00:05:55.113 03:49:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.113 03:49:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.113 03:49:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.113 03:49:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.113 03:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.113 03:49:29 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:55.113 03:49:29 -- accel/accel.sh@58 -- # build_accel_config 00:05:55.113 03:49:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.113 03:49:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.113 03:49:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.113 03:49:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.113 03:49:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.113 03:49:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.113 03:49:29 -- accel/accel.sh@42 -- # jq -r . 00:05:55.113 [2024-11-08 03:49:30.049144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.113 [2024-11-08 03:49:30.049262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58351 ] 00:05:55.113 [2024-11-08 03:49:30.194991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.372 [2024-11-08 03:49:30.343590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.372 [2024-11-08 03:49:30.343766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.938 03:49:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.938 03:49:31 -- common/autotest_common.sh@862 -- # return 0 00:05:55.938 03:49:31 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:55.938 03:49:31 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:55.938 03:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.938 03:49:31 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:55.938 03:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.196 03:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # IFS== 00:05:56.196 03:49:31 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.196 03:49:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.196 03:49:31 -- accel/accel.sh@67 -- # killprocess 58351 00:05:56.196 03:49:31 -- common/autotest_common.sh@936 -- # '[' -z 58351 ']' 00:05:56.196 03:49:31 -- common/autotest_common.sh@940 -- # kill -0 58351 00:05:56.197 03:49:31 -- common/autotest_common.sh@941 -- # uname 00:05:56.197 03:49:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.197 03:49:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58351 00:05:56.197 03:49:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.197 03:49:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.197 killing process with pid 58351 00:05:56.197 03:49:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58351' 00:05:56.197 03:49:31 -- common/autotest_common.sh@955 -- # kill 58351 00:05:56.197 03:49:31 -- common/autotest_common.sh@960 -- # wait 58351 00:05:56.763 03:49:31 -- accel/accel.sh@68 -- # trap - ERR 00:05:56.763 03:49:31 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:56.763 03:49:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:56.763 03:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.763 03:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.763 03:49:31 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:56.763 03:49:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:56.763 03:49:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.763 03:49:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.763 03:49:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.763 03:49:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.763 03:49:31 -- accel/accel.sh@42 -- # jq -r . 
00:05:56.763 03:49:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.763 03:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.763 03:49:31 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:56.763 03:49:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.763 03:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.763 03:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.763 ************************************ 00:05:56.763 START TEST accel_missing_filename 00:05:56.763 ************************************ 00:05:56.763 03:49:31 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:56.763 03:49:31 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.763 03:49:31 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:56.763 03:49:31 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:56.763 03:49:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.763 03:49:31 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:56.763 03:49:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.763 03:49:31 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:56.763 03:49:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:56.763 03:49:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.763 03:49:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.763 03:49:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.763 03:49:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.763 03:49:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.763 03:49:31 -- accel/accel.sh@42 -- # jq -r . 00:05:56.763 [2024-11-08 03:49:31.838295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.763 [2024-11-08 03:49:31.838384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58426 ] 00:05:57.022 [2024-11-08 03:49:31.970931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.022 [2024-11-08 03:49:32.107035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.280 [2024-11-08 03:49:32.181352] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.280 [2024-11-08 03:49:32.286611] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:57.538 A filename is required. 
00:05:57.538 03:49:32 -- common/autotest_common.sh@653 -- # es=234 00:05:57.538 03:49:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.538 03:49:32 -- common/autotest_common.sh@662 -- # es=106 00:05:57.538 03:49:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.538 03:49:32 -- common/autotest_common.sh@670 -- # es=1 00:05:57.538 03:49:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.538 00:05:57.538 real 0m0.604s 00:05:57.538 user 0m0.403s 00:05:57.538 sys 0m0.144s 00:05:57.538 03:49:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.538 03:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 ************************************ 00:05:57.538 END TEST accel_missing_filename 00:05:57.538 ************************************ 00:05:57.538 03:49:32 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:57.538 03:49:32 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:57.538 03:49:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.538 03:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 ************************************ 00:05:57.538 START TEST accel_compress_verify 00:05:57.538 ************************************ 00:05:57.538 03:49:32 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:57.538 03:49:32 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.538 03:49:32 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:57.538 03:49:32 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.538 03:49:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.538 03:49:32 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.538 03:49:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.538 03:49:32 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:57.538 03:49:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:57.538 03:49:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.538 03:49:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.538 03:49:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.538 03:49:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.538 03:49:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.538 03:49:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.538 03:49:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.538 03:49:32 -- accel/accel.sh@42 -- # jq -r . 00:05:57.538 [2024-11-08 03:49:32.486385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.538 [2024-11-08 03:49:32.486518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58456 ] 00:05:57.538 [2024-11-08 03:49:32.623514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.797 [2024-11-08 03:49:32.699946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.797 [2024-11-08 03:49:32.773132] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.797 [2024-11-08 03:49:32.877281] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:58.056 00:05:58.056 Compression does not support the verify option, aborting. 00:05:58.056 03:49:33 -- common/autotest_common.sh@653 -- # es=161 00:05:58.056 03:49:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.056 03:49:33 -- common/autotest_common.sh@662 -- # es=33 00:05:58.056 03:49:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:58.056 03:49:33 -- common/autotest_common.sh@670 -- # es=1 00:05:58.056 03:49:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.056 00:05:58.056 real 0m0.541s 00:05:58.056 user 0m0.351s 00:05:58.056 sys 0m0.132s 00:05:58.056 03:49:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.056 ************************************ 00:05:58.056 END TEST accel_compress_verify 00:05:58.056 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 ************************************ 00:05:58.056 03:49:33 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:58.056 03:49:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:58.056 03:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.056 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 ************************************ 00:05:58.056 START TEST accel_wrong_workload 00:05:58.056 ************************************ 00:05:58.056 03:49:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:58.056 03:49:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.056 03:49:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:58.056 03:49:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.056 03:49:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:58.056 03:49:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:58.056 03:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.056 03:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.056 03:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.056 03:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.056 03:49:33 -- accel/accel.sh@42 -- # jq -r . 
00:05:58.056 Unsupported workload type: foobar 00:05:58.056 [2024-11-08 03:49:33.082323] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:58.056 accel_perf options: 00:05:58.056 [-h help message] 00:05:58.056 [-q queue depth per core] 00:05:58.056 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:58.056 [-T number of threads per core 00:05:58.056 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:58.056 [-t time in seconds] 00:05:58.056 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:58.056 [ dif_verify, , dif_generate, dif_generate_copy 00:05:58.056 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:58.056 [-l for compress/decompress workloads, name of uncompressed input file 00:05:58.056 [-S for crc32c workload, use this seed value (default 0) 00:05:58.056 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:58.056 [-f for fill workload, use this BYTE value (default 255) 00:05:58.056 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:58.056 [-y verify result if this switch is on] 00:05:58.056 [-a tasks to allocate per core (default: same value as -q)] 00:05:58.056 Can be used to spread operations across a wider range of memory. 00:05:58.056 03:49:33 -- common/autotest_common.sh@653 -- # es=1 00:05:58.056 03:49:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.056 03:49:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.056 03:49:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.056 00:05:58.056 real 0m0.030s 00:05:58.056 user 0m0.024s 00:05:58.056 sys 0m0.006s 00:05:58.056 03:49:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.056 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 ************************************ 00:05:58.056 END TEST accel_wrong_workload 00:05:58.056 ************************************ 00:05:58.056 03:49:33 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:58.056 03:49:33 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:58.056 03:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.056 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 ************************************ 00:05:58.056 START TEST accel_negative_buffers 00:05:58.056 ************************************ 00:05:58.056 03:49:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:58.056 03:49:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.056 03:49:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:58.056 03:49:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:58.056 03:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.056 03:49:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:58.056 03:49:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:58.056 03:49:33 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:58.056 03:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.056 03:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.056 03:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.056 03:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.056 03:49:33 -- accel/accel.sh@42 -- # jq -r . 00:05:58.056 -x option must be non-negative. 00:05:58.056 [2024-11-08 03:49:33.156890] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:58.056 accel_perf options: 00:05:58.056 [-h help message] 00:05:58.056 [-q queue depth per core] 00:05:58.056 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:58.056 [-T number of threads per core 00:05:58.056 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:58.056 [-t time in seconds] 00:05:58.056 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:58.056 [ dif_verify, , dif_generate, dif_generate_copy 00:05:58.056 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:58.056 [-l for compress/decompress workloads, name of uncompressed input file 00:05:58.056 [-S for crc32c workload, use this seed value (default 0) 00:05:58.056 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:58.056 [-f for fill workload, use this BYTE value (default 255) 00:05:58.056 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:58.056 [-y verify result if this switch is on] 00:05:58.056 [-a tasks to allocate per core (default: same value as -q)] 00:05:58.056 Can be used to spread operations across a wider range of memory. 
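The option listing above documents -x as the number of xor source buffers with a minimum of two, which is why the test's -x -1 is rejected during argument parsing before any work is submitted. For illustration, the shape of the operation accel_perf measures for -w xor, sketched in C (not SPDK's accel module code):

/* XOR of N source buffers into one destination -- the work behind
 * accel_perf's -w xor / -x options (minimum two sources). Sketch only. */
#include <stddef.h>
#include <stdint.h>

void xor_buffers(uint8_t *dst, uint8_t *const srcs[], size_t nsrc, size_t len)
{
    if (nsrc < 2) {
        return; /* mirrors the CLI check: fewer than two sources is invalid */
    }
    for (size_t i = 0; i < len; i++) {
        uint8_t v = srcs[0][i];
        for (size_t s = 1; s < nsrc; s++) {
            v ^= srcs[s][i];
        }
        dst[i] = v;
    }
}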
00:05:58.056 03:49:33 -- common/autotest_common.sh@653 -- # es=1 00:05:58.056 03:49:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.056 03:49:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.056 03:49:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.056 00:05:58.056 real 0m0.026s 00:05:58.056 user 0m0.012s 00:05:58.056 sys 0m0.014s 00:05:58.056 03:49:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.056 ************************************ 00:05:58.056 END TEST accel_negative_buffers 00:05:58.056 ************************************ 00:05:58.056 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.315 03:49:33 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:58.315 03:49:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:58.315 03:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.315 03:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.316 ************************************ 00:05:58.316 START TEST accel_crc32c 00:05:58.316 ************************************ 00:05:58.316 03:49:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:58.316 03:49:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.316 03:49:33 -- accel/accel.sh@17 -- # local accel_module 00:05:58.316 03:49:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:58.316 03:49:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:58.316 03:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.316 03:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.316 03:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.316 03:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.316 03:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.316 03:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.316 03:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.316 03:49:33 -- accel/accel.sh@42 -- # jq -r . 00:05:58.316 [2024-11-08 03:49:33.234064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.316 [2024-11-08 03:49:33.234139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58515 ] 00:05:58.316 [2024-11-08 03:49:33.371325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.574 [2024-11-08 03:49:33.458268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.952 03:49:34 -- accel/accel.sh@18 -- # out=' 00:05:59.952 SPDK Configuration: 00:05:59.952 Core mask: 0x1 00:05:59.952 00:05:59.952 Accel Perf Configuration: 00:05:59.952 Workload Type: crc32c 00:05:59.952 CRC-32C seed: 32 00:05:59.952 Transfer size: 4096 bytes 00:05:59.952 Vector count 1 00:05:59.952 Module: software 00:05:59.952 Queue depth: 32 00:05:59.952 Allocate depth: 32 00:05:59.952 # threads/core: 1 00:05:59.952 Run time: 1 seconds 00:05:59.952 Verify: Yes 00:05:59.952 00:05:59.952 Running for 1 seconds... 
00:05:59.952 00:05:59.952 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.952 ------------------------------------------------------------------------------------ 00:05:59.952 0,0 568640/s 2221 MiB/s 0 0 00:05:59.952 ==================================================================================== 00:05:59.952 Total 568640/s 2221 MiB/s 0 0' 00:05:59.952 03:49:34 -- accel/accel.sh@20 -- # IFS=: 00:05:59.952 03:49:34 -- accel/accel.sh@20 -- # read -r var val 00:05:59.952 03:49:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:59.952 03:49:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:59.952 03:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.952 03:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.952 03:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.952 03:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.952 03:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.952 03:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.952 03:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.952 03:49:34 -- accel/accel.sh@42 -- # jq -r . 00:05:59.952 [2024-11-08 03:49:34.832747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.952 [2024-11-08 03:49:34.833625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58534 ] 00:05:59.952 [2024-11-08 03:49:34.975717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.212 [2024-11-08 03:49:35.123584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=0x1 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=crc32c 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=32 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=software 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=32 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=32 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=1 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val=Yes 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:00.212 03:49:35 -- accel/accel.sh@21 -- # val= 00:06:00.212 03:49:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # IFS=: 00:06:00.212 03:49:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- 
accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@21 -- # val= 00:06:01.591 03:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # IFS=: 00:06:01.591 03:49:36 -- accel/accel.sh@20 -- # read -r var val 00:06:01.591 03:49:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.591 03:49:36 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:01.591 03:49:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.591 00:06:01.591 real 0m3.270s 00:06:01.591 user 0m2.774s 00:06:01.591 sys 0m0.293s 00:06:01.591 03:49:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.591 ************************************ 00:06:01.591 END TEST accel_crc32c 00:06:01.591 ************************************ 00:06:01.591 03:49:36 -- common/autotest_common.sh@10 -- # set +x 00:06:01.591 03:49:36 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:01.591 03:49:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:01.591 03:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.591 03:49:36 -- common/autotest_common.sh@10 -- # set +x 00:06:01.591 ************************************ 00:06:01.591 START TEST accel_crc32c_C2 00:06:01.591 ************************************ 00:06:01.591 03:49:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:01.591 03:49:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.591 03:49:36 -- accel/accel.sh@17 -- # local accel_module 00:06:01.591 03:49:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.591 03:49:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.591 03:49:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.591 03:49:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.591 03:49:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.591 03:49:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.591 03:49:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.591 03:49:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.591 03:49:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.591 03:49:36 -- accel/accel.sh@42 -- # jq -r . 00:06:01.591 [2024-11-08 03:49:36.554974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.592 [2024-11-08 03:49:36.555058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:06:01.592 [2024-11-08 03:49:36.678570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.851 [2024-11-08 03:49:36.826283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.228 03:49:38 -- accel/accel.sh@18 -- # out=' 00:06:03.228 SPDK Configuration: 00:06:03.228 Core mask: 0x1 00:06:03.228 00:06:03.228 Accel Perf Configuration: 00:06:03.228 Workload Type: crc32c 00:06:03.228 CRC-32C seed: 0 00:06:03.228 Transfer size: 4096 bytes 00:06:03.228 Vector count 2 00:06:03.228 Module: software 00:06:03.228 Queue depth: 32 00:06:03.228 Allocate depth: 32 00:06:03.228 # threads/core: 1 00:06:03.228 Run time: 1 seconds 00:06:03.228 Verify: Yes 00:06:03.228 00:06:03.228 Running for 1 seconds... 
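With -C 2 each operation chains two 4096-byte vectors. Reading the table below, the per-core row appears to count both vectors (8192 bytes per transfer) while the Total line appears to count only the base transfer size; both figures follow from the same 433088 transfers/s, and the copy_crc32c -C 2 run later in this log shows the same pattern. A hedged check of that reading (illustrative arithmetic only):

  # per-core row (both vectors) vs Total line (base transfer size)
  awk 'BEGIN { printf "%d MiB/s vs %d MiB/s\n", 433088 * 8192 / 1048576, 433088 * 4096 / 1048576 }'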
00:06:03.228 00:06:03.228 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.228 ------------------------------------------------------------------------------------ 00:06:03.228 0,0 433088/s 3383 MiB/s 0 0 00:06:03.228 ==================================================================================== 00:06:03.228 Total 433088/s 1691 MiB/s 0 0' 00:06:03.228 03:49:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:03.228 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.228 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.228 03:49:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:03.228 03:49:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.228 03:49:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.228 03:49:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.228 03:49:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.228 03:49:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.228 03:49:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.228 03:49:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.228 03:49:38 -- accel/accel.sh@42 -- # jq -r . 00:06:03.229 [2024-11-08 03:49:38.206245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.229 [2024-11-08 03:49:38.206359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58588 ] 00:06:03.492 [2024-11-08 03:49:38.341196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.492 [2024-11-08 03:49:38.487525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=0x1 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=crc32c 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=0 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=software 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=32 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=32 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=1 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val=Yes 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:03.492 03:49:38 -- accel/accel.sh@21 -- # val= 00:06:03.492 03:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # IFS=: 00:06:03.492 03:49:38 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@21 -- # val= 00:06:04.905 03:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # IFS=: 00:06:04.905 03:49:39 -- accel/accel.sh@20 -- # read -r var val 00:06:04.905 03:49:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.905 03:49:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:04.905 03:49:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.905 00:06:04.905 real 0m3.272s 00:06:04.905 user 0m2.764s 00:06:04.905 sys 0m0.302s 00:06:04.905 03:49:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.905 03:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.905 ************************************ 00:06:04.905 END TEST accel_crc32c_C2 00:06:04.905 ************************************ 00:06:04.905 03:49:39 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:04.905 03:49:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.905 03:49:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.905 03:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.905 ************************************ 00:06:04.905 START TEST accel_copy 00:06:04.905 ************************************ 00:06:04.905 03:49:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:04.905 03:49:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.905 03:49:39 -- accel/accel.sh@17 -- # local accel_module 00:06:04.905 03:49:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:04.905 03:49:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:04.905 03:49:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.905 03:49:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.905 03:49:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.905 03:49:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.905 03:49:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.905 03:49:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.905 03:49:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.905 03:49:39 -- accel/accel.sh@42 -- # jq -r . 00:06:04.905 [2024-11-08 03:49:39.888686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.905 [2024-11-08 03:49:39.888818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58628 ] 00:06:05.163 [2024-11-08 03:49:40.028795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.163 [2024-11-08 03:49:40.125693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.541 03:49:41 -- accel/accel.sh@18 -- # out=' 00:06:06.541 SPDK Configuration: 00:06:06.541 Core mask: 0x1 00:06:06.541 00:06:06.541 Accel Perf Configuration: 00:06:06.541 Workload Type: copy 00:06:06.541 Transfer size: 4096 bytes 00:06:06.541 Vector count 1 00:06:06.541 Module: software 00:06:06.541 Queue depth: 32 00:06:06.541 Allocate depth: 32 00:06:06.541 # threads/core: 1 00:06:06.541 Run time: 1 seconds 00:06:06.541 Verify: Yes 00:06:06.541 00:06:06.541 Running for 1 seconds... 
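The copy test drives the same binary with -t 1 -w copy -y, as the accel.sh@12 trace above shows; outside the harness the JSON config fed on /dev/fd/62 can be dropped and the transfer size pinned with -o. A standalone sketch (assuming an SPDK build at the same path; -o 4096 mirrors the 4KiB default):

  # software copy, 4096-byte transfers, 1 second, results verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -o 4096 -y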
00:06:06.541 00:06:06.541 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.541 ------------------------------------------------------------------------------------ 00:06:06.541 0,0 396160/s 1547 MiB/s 0 0 00:06:06.541 ==================================================================================== 00:06:06.541 Total 396160/s 1547 MiB/s 0 0' 00:06:06.541 03:49:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:06.541 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.541 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.541 03:49:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:06.541 03:49:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.541 03:49:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.541 03:49:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.541 03:49:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.541 03:49:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.541 03:49:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.541 03:49:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.541 03:49:41 -- accel/accel.sh@42 -- # jq -r . 00:06:06.541 [2024-11-08 03:49:41.472175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.541 [2024-11-08 03:49:41.472353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58648 ] 00:06:06.541 [2024-11-08 03:49:41.611632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.800 [2024-11-08 03:49:41.685289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val=0x1 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val=copy 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- 
accel/accel.sh@21 -- # val= 00:06:06.800 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.800 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.800 03:49:41 -- accel/accel.sh@21 -- # val=software 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val=32 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val=32 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val=1 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val=Yes 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:06.801 03:49:41 -- accel/accel.sh@21 -- # val= 00:06:06.801 03:49:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # IFS=: 00:06:06.801 03:49:41 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@21 -- # val= 00:06:08.178 03:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.178 03:49:43 -- accel/accel.sh@20 -- # IFS=: 00:06:08.178 03:49:43 -- 
accel/accel.sh@20 -- # read -r var val 00:06:08.178 03:49:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.178 03:49:43 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:08.178 03:49:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.178 00:06:08.178 real 0m3.157s 00:06:08.178 user 0m2.662s 00:06:08.178 sys 0m0.289s 00:06:08.178 03:49:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.178 03:49:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.178 ************************************ 00:06:08.178 END TEST accel_copy 00:06:08.178 ************************************ 00:06:08.178 03:49:43 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.178 03:49:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:08.178 03:49:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.178 03:49:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.178 ************************************ 00:06:08.178 START TEST accel_fill 00:06:08.178 ************************************ 00:06:08.178 03:49:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.178 03:49:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.178 03:49:43 -- accel/accel.sh@17 -- # local accel_module 00:06:08.178 03:49:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.178 03:49:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.178 03:49:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.178 03:49:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.178 03:49:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.178 03:49:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.178 03:49:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.178 03:49:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.178 03:49:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.178 03:49:43 -- accel/accel.sh@42 -- # jq -r . 00:06:08.178 [2024-11-08 03:49:43.103587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.178 [2024-11-08 03:49:43.103749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58682 ] 00:06:08.178 [2024-11-08 03:49:43.233491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.437 [2024-11-08 03:49:43.306998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.815 03:49:44 -- accel/accel.sh@18 -- # out=' 00:06:09.815 SPDK Configuration: 00:06:09.815 Core mask: 0x1 00:06:09.815 00:06:09.815 Accel Perf Configuration: 00:06:09.815 Workload Type: fill 00:06:09.815 Fill pattern: 0x80 00:06:09.815 Transfer size: 4096 bytes 00:06:09.815 Vector count 1 00:06:09.815 Module: software 00:06:09.815 Queue depth: 64 00:06:09.815 Allocate depth: 64 00:06:09.815 # threads/core: 1 00:06:09.815 Run time: 1 seconds 00:06:09.815 Verify: Yes 00:06:09.815 00:06:09.815 Running for 1 seconds... 
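Here -f 128 matches the "Fill pattern: 0x80" line above, and -q 64 -a 64 match the queue and allocate depths of 64; per the usage text, -a defaults to the -q value, so it could be omitted. A standalone equivalent of the traced command (illustrative, assuming the same build path):

  # fill with byte 0x80 (decimal 128), queue depth 64, results verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -y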
00:06:09.815 00:06:09.815 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.815 ------------------------------------------------------------------------------------ 00:06:09.815 0,0 575232/s 2247 MiB/s 0 0 00:06:09.815 ==================================================================================== 00:06:09.815 Total 575232/s 2247 MiB/s 0 0' 00:06:09.815 03:49:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.815 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:09.815 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:09.815 03:49:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.815 03:49:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.815 03:49:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.815 03:49:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.815 03:49:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.815 03:49:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.815 03:49:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.815 03:49:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.815 03:49:44 -- accel/accel.sh@42 -- # jq -r . 00:06:09.815 [2024-11-08 03:49:44.648065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.815 [2024-11-08 03:49:44.648456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:06:09.815 [2024-11-08 03:49:44.778321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.815 [2024-11-08 03:49:44.856883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.074 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.074 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.074 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.074 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.074 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.074 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=0x1 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=fill 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=0x80 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 
00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=software 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=64 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=64 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=1 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val=Yes 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:10.075 03:49:44 -- accel/accel.sh@21 -- # val= 00:06:10.075 03:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # IFS=: 00:06:10.075 03:49:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.452 03:49:46 -- accel/accel.sh@21 -- # val= 00:06:11.452 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.452 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 03:49:46 -- accel/accel.sh@21 -- # val= 00:06:11.453 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 03:49:46 -- accel/accel.sh@21 -- # val= 00:06:11.453 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 03:49:46 -- accel/accel.sh@21 -- # val= 00:06:11.453 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 ************************************ 00:06:11.453 END TEST accel_fill 00:06:11.453 ************************************ 00:06:11.453 03:49:46 -- 
accel/accel.sh@21 -- # val= 00:06:11.453 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 03:49:46 -- accel/accel.sh@21 -- # val= 00:06:11.453 03:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # IFS=: 00:06:11.453 03:49:46 -- accel/accel.sh@20 -- # read -r var val 00:06:11.453 03:49:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.453 03:49:46 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:11.453 03:49:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.453 00:06:11.453 real 0m3.094s 00:06:11.453 user 0m2.617s 00:06:11.453 sys 0m0.271s 00:06:11.453 03:49:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.453 03:49:46 -- common/autotest_common.sh@10 -- # set +x 00:06:11.453 03:49:46 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.453 03:49:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.453 03:49:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.453 03:49:46 -- common/autotest_common.sh@10 -- # set +x 00:06:11.453 ************************************ 00:06:11.453 START TEST accel_copy_crc32c 00:06:11.453 ************************************ 00:06:11.453 03:49:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:11.453 03:49:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.453 03:49:46 -- accel/accel.sh@17 -- # local accel_module 00:06:11.453 03:49:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.453 03:49:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.453 03:49:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.453 03:49:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.453 03:49:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.453 03:49:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.453 03:49:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.453 03:49:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.453 03:49:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.453 03:49:46 -- accel/accel.sh@42 -- # jq -r . 00:06:11.453 [2024-11-08 03:49:46.256850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.453 [2024-11-08 03:49:46.256985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58736 ] 00:06:11.453 [2024-11-08 03:49:46.394665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.453 [2024-11-08 03:49:46.468848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.830 03:49:47 -- accel/accel.sh@18 -- # out=' 00:06:12.830 SPDK Configuration: 00:06:12.830 Core mask: 0x1 00:06:12.830 00:06:12.830 Accel Perf Configuration: 00:06:12.830 Workload Type: copy_crc32c 00:06:12.830 CRC-32C seed: 0 00:06:12.830 Vector size: 4096 bytes 00:06:12.830 Transfer size: 4096 bytes 00:06:12.830 Vector count 1 00:06:12.830 Module: software 00:06:12.830 Queue depth: 32 00:06:12.830 Allocate depth: 32 00:06:12.830 # threads/core: 1 00:06:12.830 Run time: 1 seconds 00:06:12.830 Verify: Yes 00:06:12.830 00:06:12.830 Running for 1 seconds... 
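copy_crc32c fuses the two earlier workloads: each operation copies a 4096-byte buffer and computes its CRC-32C (seed 0 here), which is why its transfer rate in the table below sits well under the standalone copy and crc32c rates. The traced harness command reduces to (illustrative):

  # combined copy + CRC-32C in a single operation, results verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y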
00:06:12.830 00:06:12.830 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.830 ------------------------------------------------------------------------------------ 00:06:12.830 0,0 313760/s 1225 MiB/s 0 0 00:06:12.830 ==================================================================================== 00:06:12.830 Total 313760/s 1225 MiB/s 0 0' 00:06:12.830 03:49:47 -- accel/accel.sh@20 -- # IFS=: 00:06:12.830 03:49:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:12.830 03:49:47 -- accel/accel.sh@20 -- # read -r var val 00:06:12.830 03:49:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:12.830 03:49:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.830 03:49:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.830 03:49:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.830 03:49:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.830 03:49:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.830 03:49:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.830 03:49:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.830 03:49:47 -- accel/accel.sh@42 -- # jq -r . 00:06:12.830 [2024-11-08 03:49:47.803730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.830 [2024-11-08 03:49:47.803901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:06:13.089 [2024-11-08 03:49:47.946054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.089 [2024-11-08 03:49:48.019126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val=0x1 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.089 03:49:48 -- accel/accel.sh@21 -- # val=0 00:06:13.089 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.089 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 
03:49:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val=software 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val=32 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val=32 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val=1 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val=Yes 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:13.090 03:49:48 -- accel/accel.sh@21 -- # val= 00:06:13.090 03:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # IFS=: 00:06:13.090 03:49:48 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 
00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@21 -- # val= 00:06:14.499 03:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # IFS=: 00:06:14.499 03:49:49 -- accel/accel.sh@20 -- # read -r var val 00:06:14.499 03:49:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.499 03:49:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:14.499 03:49:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.499 00:06:14.499 real 0m3.105s 00:06:14.499 user 0m2.624s 00:06:14.499 sys 0m0.270s 00:06:14.499 03:49:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.499 03:49:49 -- common/autotest_common.sh@10 -- # set +x 00:06:14.499 ************************************ 00:06:14.499 END TEST accel_copy_crc32c 00:06:14.499 ************************************ 00:06:14.499 03:49:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:14.499 03:49:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.499 03:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.499 03:49:49 -- common/autotest_common.sh@10 -- # set +x 00:06:14.499 ************************************ 00:06:14.499 START TEST accel_copy_crc32c_C2 00:06:14.499 ************************************ 00:06:14.499 03:49:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:14.499 03:49:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.499 03:49:49 -- accel/accel.sh@17 -- # local accel_module 00:06:14.499 03:49:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:14.499 03:49:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:14.499 03:49:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.499 03:49:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.499 03:49:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.499 03:49:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.499 03:49:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.499 03:49:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.499 03:49:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.499 03:49:49 -- accel/accel.sh@42 -- # jq -r . 00:06:14.499 [2024-11-08 03:49:49.423315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:14.499 [2024-11-08 03:49:49.424046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58790 ] 00:06:14.499 [2024-11-08 03:49:49.557118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.757 [2024-11-08 03:49:49.627356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.133 03:49:50 -- accel/accel.sh@18 -- # out=' 00:06:16.134 SPDK Configuration: 00:06:16.134 Core mask: 0x1 00:06:16.134 00:06:16.134 Accel Perf Configuration: 00:06:16.134 Workload Type: copy_crc32c 00:06:16.134 CRC-32C seed: 0 00:06:16.134 Vector size: 4096 bytes 00:06:16.134 Transfer size: 8192 bytes 00:06:16.134 Vector count 2 00:06:16.134 Module: software 00:06:16.134 Queue depth: 32 00:06:16.134 Allocate depth: 32 00:06:16.134 # threads/core: 1 00:06:16.134 Run time: 1 seconds 00:06:16.134 Verify: Yes 00:06:16.134 00:06:16.134 Running for 1 seconds... 00:06:16.134 00:06:16.134 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.134 ------------------------------------------------------------------------------------ 00:06:16.134 0,0 221600/s 1731 MiB/s 0 0 00:06:16.134 ==================================================================================== 00:06:16.134 Total 221600/s 865 MiB/s 0 0' 00:06:16.134 03:49:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:16.134 03:49:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:16.134 03:49:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.134 03:49:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.134 03:49:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.134 03:49:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.134 03:49:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.134 03:49:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.134 03:49:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.134 03:49:50 -- accel/accel.sh@42 -- # jq -r . 00:06:16.134 [2024-11-08 03:49:50.935745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:16.134 [2024-11-08 03:49:50.935820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:06:16.134 [2024-11-08 03:49:51.063358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.134 [2024-11-08 03:49:51.137825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=0x1 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=0 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=software 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=32 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=32 
00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=1 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val=Yes 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:16.134 03:49:51 -- accel/accel.sh@21 -- # val= 00:06:16.134 03:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # IFS=: 00:06:16.134 03:49:51 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@21 -- # val= 00:06:17.512 03:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # IFS=: 00:06:17.512 03:49:52 -- accel/accel.sh@20 -- # read -r var val 00:06:17.512 03:49:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.512 03:49:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:17.512 03:49:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.512 00:06:17.512 real 0m2.987s 00:06:17.512 user 0m2.523s 00:06:17.512 sys 0m0.263s 00:06:17.512 03:49:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.512 03:49:52 -- common/autotest_common.sh@10 -- # set +x 00:06:17.512 ************************************ 00:06:17.512 END TEST accel_copy_crc32c_C2 00:06:17.512 ************************************ 00:06:17.512 03:49:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:17.512 03:49:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:17.512 03:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.512 03:49:52 -- common/autotest_common.sh@10 -- # set +x 00:06:17.512 ************************************ 00:06:17.512 START TEST accel_dualcast 00:06:17.512 ************************************ 00:06:17.512 03:49:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:17.512 03:49:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.512 03:49:52 -- accel/accel.sh@17 -- # local accel_module 00:06:17.512 03:49:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:17.512 03:49:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:17.512 03:49:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.512 03:49:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.512 03:49:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.512 03:49:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.512 03:49:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.512 03:49:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.512 03:49:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.512 03:49:52 -- accel/accel.sh@42 -- # jq -r . 00:06:17.512 [2024-11-08 03:49:52.453173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.512 [2024-11-08 03:49:52.453988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:06:17.512 [2024-11-08 03:49:52.576106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.771 [2024-11-08 03:49:52.652716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.147 03:49:53 -- accel/accel.sh@18 -- # out=' 00:06:19.147 SPDK Configuration: 00:06:19.147 Core mask: 0x1 00:06:19.147 00:06:19.147 Accel Perf Configuration: 00:06:19.147 Workload Type: dualcast 00:06:19.147 Transfer size: 4096 bytes 00:06:19.147 Vector count 1 00:06:19.147 Module: software 00:06:19.147 Queue depth: 32 00:06:19.147 Allocate depth: 32 00:06:19.147 # threads/core: 1 00:06:19.147 Run time: 1 seconds 00:06:19.147 Verify: Yes 00:06:19.147 00:06:19.147 Running for 1 seconds... 00:06:19.147 00:06:19.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.147 ------------------------------------------------------------------------------------ 00:06:19.147 0,0 420128/s 1641 MiB/s 0 0 00:06:19.147 ==================================================================================== 00:06:19.147 Total 420128/s 1641 MiB/s 0 0' 00:06:19.147 03:49:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.147 03:49:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.147 03:49:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.147 03:49:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.147 03:49:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.147 03:49:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.147 03:49:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.147 03:49:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.147 03:49:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.147 03:49:53 -- accel/accel.sh@42 -- # jq -r . 
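The xtrace above shows autotest's accel.sh wrapper relaunching SPDK's accel_perf example for the dualcast workload, feeding it a JSON accel config over /dev/fd/62. Based only on the flags and the configuration echoed in this log (-t appears to set the run time in seconds, -w the workload type, -y enables result verification), a minimal manual re-run of this case might look like the following sketch; flag meanings are inferred from the printed configuration, not from accel_perf's help text:

    # Hedged sketch: re-running the dualcast case traced above by hand.
    # The binary path is the vagrant layout used by this CI run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y

The -C 2 seen in the earlier copy_crc32c run (Vector count 2) and the -x 3 used later for the three-buffer xor case are per-workload variants of this same invocation.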
00:06:19.147 [2024-11-08 03:49:53.883200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.147 [2024-11-08 03:49:53.883292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:06:19.147 [2024-11-08 03:49:54.005834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.147 [2024-11-08 03:49:54.070242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val=0x1 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val=dualcast 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val=software 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val=32 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.147 03:49:54 -- accel/accel.sh@21 -- # val=32 00:06:19.147 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.147 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.148 03:49:54 -- accel/accel.sh@21 -- # val=1 00:06:19.148 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 
03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.148 03:49:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.148 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.148 03:49:54 -- accel/accel.sh@21 -- # val=Yes 00:06:19.148 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.148 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.148 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:19.148 03:49:54 -- accel/accel.sh@21 -- # val= 00:06:19.148 03:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # IFS=: 00:06:19.148 03:49:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@21 -- # val= 00:06:20.527 03:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # IFS=: 00:06:20.527 03:49:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.527 03:49:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.527 03:49:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:20.527 03:49:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.527 00:06:20.527 real 0m2.856s 00:06:20.527 user 0m2.454s 00:06:20.527 sys 0m0.203s 00:06:20.527 03:49:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.527 03:49:55 -- common/autotest_common.sh@10 -- # set +x 00:06:20.527 ************************************ 00:06:20.527 END TEST accel_dualcast 00:06:20.527 ************************************ 00:06:20.527 03:49:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:20.527 03:49:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.527 03:49:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.527 03:49:55 -- common/autotest_common.sh@10 -- # set +x 00:06:20.527 ************************************ 00:06:20.527 START TEST accel_compare 00:06:20.527 ************************************ 00:06:20.527 03:49:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:20.527 
03:49:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.527 03:49:55 -- accel/accel.sh@17 -- # local accel_module 00:06:20.527 03:49:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:20.527 03:49:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:20.527 03:49:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.527 03:49:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.527 03:49:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.527 03:49:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.527 03:49:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.527 03:49:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.527 03:49:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.527 03:49:55 -- accel/accel.sh@42 -- # jq -r . 00:06:20.527 [2024-11-08 03:49:55.360241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.527 [2024-11-08 03:49:55.360336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:06:20.527 [2024-11-08 03:49:55.487507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.527 [2024-11-08 03:49:55.553549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.904 03:49:56 -- accel/accel.sh@18 -- # out=' 00:06:21.904 SPDK Configuration: 00:06:21.904 Core mask: 0x1 00:06:21.904 00:06:21.904 Accel Perf Configuration: 00:06:21.904 Workload Type: compare 00:06:21.904 Transfer size: 4096 bytes 00:06:21.904 Vector count 1 00:06:21.904 Module: software 00:06:21.904 Queue depth: 32 00:06:21.904 Allocate depth: 32 00:06:21.904 # threads/core: 1 00:06:21.904 Run time: 1 seconds 00:06:21.904 Verify: Yes 00:06:21.904 00:06:21.904 Running for 1 seconds... 00:06:21.904 00:06:21.904 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.904 ------------------------------------------------------------------------------------ 00:06:21.904 0,0 551040/s 2152 MiB/s 0 0 00:06:21.904 ==================================================================================== 00:06:21.904 Total 551040/s 2152 MiB/s 0 0' 00:06:21.904 03:49:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:21.904 03:49:56 -- accel/accel.sh@20 -- # IFS=: 00:06:21.904 03:49:56 -- accel/accel.sh@20 -- # read -r var val 00:06:21.904 03:49:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:21.904 03:49:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.904 03:49:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.904 03:49:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.904 03:49:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.904 03:49:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.904 03:49:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.904 03:49:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.904 03:49:56 -- accel/accel.sh@42 -- # jq -r . 00:06:21.904 [2024-11-08 03:49:56.787504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.904 [2024-11-08 03:49:56.787585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:06:21.904 [2024-11-08 03:49:56.916294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.904 [2024-11-08 03:49:56.981964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=0x1 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=compare 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=software 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=32 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=32 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=1 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val='1 seconds' 
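Each test case in this log closes with a real/user/sys triple (for example, real 0m2.861s just below for accel_compare), so the harness evidently wraps every accel_test invocation with bash's time builtin inside run_test. A sketch of that wrapper, assuming only the structure the START TEST / END TEST banners and timing lines imply:

    # Hedged sketch of run_test as this log implies it: banner, timed body, banner.
    # The real helper in autotest_common.sh also manages xtrace state.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }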
00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val=Yes 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:22.163 03:49:57 -- accel/accel.sh@21 -- # val= 00:06:22.163 03:49:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # IFS=: 00:06:22.163 03:49:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.099 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@21 -- # val= 00:06:23.100 03:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # IFS=: 00:06:23.100 03:49:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.100 03:49:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.100 03:49:58 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:23.100 03:49:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.100 00:06:23.100 real 0m2.861s 00:06:23.100 user 0m2.446s 00:06:23.100 sys 0m0.209s 00:06:23.100 ************************************ 00:06:23.100 END TEST accel_compare 00:06:23.100 ************************************ 00:06:23.100 03:49:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.100 03:49:58 -- common/autotest_common.sh@10 -- # set +x 00:06:23.358 03:49:58 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:23.358 03:49:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:23.358 03:49:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.358 03:49:58 -- common/autotest_common.sh@10 -- # set +x 00:06:23.358 ************************************ 00:06:23.358 START TEST accel_xor 00:06:23.358 ************************************ 00:06:23.358 03:49:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:23.358 03:49:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.358 03:49:58 -- accel/accel.sh@17 -- # local accel_module 00:06:23.358 
03:49:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:23.358 03:49:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:23.358 03:49:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.358 03:49:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.358 03:49:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.358 03:49:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.358 03:49:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.358 03:49:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.358 03:49:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.358 03:49:58 -- accel/accel.sh@42 -- # jq -r . 00:06:23.358 [2024-11-08 03:49:58.281511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.358 [2024-11-08 03:49:58.281611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:06:23.358 [2024-11-08 03:49:58.414783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.616 [2024-11-08 03:49:58.483941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.992 03:49:59 -- accel/accel.sh@18 -- # out=' 00:06:24.992 SPDK Configuration: 00:06:24.992 Core mask: 0x1 00:06:24.992 00:06:24.992 Accel Perf Configuration: 00:06:24.992 Workload Type: xor 00:06:24.992 Source buffers: 2 00:06:24.992 Transfer size: 4096 bytes 00:06:24.992 Vector count 1 00:06:24.992 Module: software 00:06:24.992 Queue depth: 32 00:06:24.992 Allocate depth: 32 00:06:24.992 # threads/core: 1 00:06:24.992 Run time: 1 seconds 00:06:24.992 Verify: Yes 00:06:24.992 00:06:24.992 Running for 1 seconds... 00:06:24.992 00:06:24.992 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.992 ------------------------------------------------------------------------------------ 00:06:24.992 0,0 288320/s 1126 MiB/s 0 0 00:06:24.992 ==================================================================================== 00:06:24.992 Total 288320/s 1126 MiB/s 0 0' 00:06:24.992 03:49:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:24.992 03:49:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.992 03:49:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.992 03:49:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.992 03:49:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.992 03:49:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.992 03:49:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.992 03:49:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.992 03:49:59 -- accel/accel.sh@42 -- # jq -r . 00:06:24.992 [2024-11-08 03:49:59.722513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.992 [2024-11-08 03:49:59.722593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:06:24.992 [2024-11-08 03:49:59.853258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.992 [2024-11-08 03:49:59.920009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=0x1 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=xor 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=2 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=software 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=32 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=32 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=1 00:06:24.992 03:49:59 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val=Yes 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.992 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.992 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:24.992 03:49:59 -- accel/accel.sh@21 -- # val= 00:06:24.993 03:49:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.993 03:49:59 -- accel/accel.sh@20 -- # IFS=: 00:06:24.993 03:49:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 ************************************ 00:06:26.379 END TEST accel_xor 00:06:26.379 ************************************ 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@21 -- # val= 00:06:26.379 03:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # IFS=: 00:06:26.379 03:50:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.379 03:50:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.379 03:50:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:26.379 03:50:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.379 00:06:26.379 real 0m2.889s 00:06:26.379 user 0m2.467s 00:06:26.379 sys 0m0.221s 00:06:26.379 03:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.379 03:50:01 -- common/autotest_common.sh@10 -- # set +x 00:06:26.379 03:50:01 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:26.379 03:50:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:26.379 03:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.379 03:50:01 -- common/autotest_common.sh@10 -- # set +x 00:06:26.379 ************************************ 00:06:26.379 START TEST accel_xor 00:06:26.379 ************************************ 00:06:26.379 
03:50:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:26.379 03:50:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.379 03:50:01 -- accel/accel.sh@17 -- # local accel_module 00:06:26.379 03:50:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:26.379 03:50:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:26.379 03:50:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.379 03:50:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.379 03:50:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.379 03:50:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.379 03:50:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.379 03:50:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.379 03:50:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.379 03:50:01 -- accel/accel.sh@42 -- # jq -r . 00:06:26.379 [2024-11-08 03:50:01.223927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.379 [2024-11-08 03:50:01.224022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:06:26.379 [2024-11-08 03:50:01.361975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.379 [2024-11-08 03:50:01.427501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.772 03:50:02 -- accel/accel.sh@18 -- # out=' 00:06:27.772 SPDK Configuration: 00:06:27.772 Core mask: 0x1 00:06:27.772 00:06:27.772 Accel Perf Configuration: 00:06:27.772 Workload Type: xor 00:06:27.772 Source buffers: 3 00:06:27.772 Transfer size: 4096 bytes 00:06:27.772 Vector count 1 00:06:27.772 Module: software 00:06:27.772 Queue depth: 32 00:06:27.772 Allocate depth: 32 00:06:27.772 # threads/core: 1 00:06:27.772 Run time: 1 seconds 00:06:27.772 Verify: Yes 00:06:27.772 00:06:27.772 Running for 1 seconds... 00:06:27.772 00:06:27.772 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.772 ------------------------------------------------------------------------------------ 00:06:27.772 0,0 268320/s 1048 MiB/s 0 0 00:06:27.772 ==================================================================================== 00:06:27.772 Total 268320/s 1048 MiB/s 0 0' 00:06:27.772 03:50:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:27.772 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:27.772 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:27.772 03:50:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:27.772 03:50:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.772 03:50:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.772 03:50:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.772 03:50:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.772 03:50:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.772 03:50:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.772 03:50:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.772 03:50:02 -- accel/accel.sh@42 -- # jq -r . 00:06:27.772 [2024-11-08 03:50:02.657821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:27.772 [2024-11-08 03:50:02.657918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:06:27.772 [2024-11-08 03:50:02.784338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.772 [2024-11-08 03:50:02.861129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val=0x1 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val=xor 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val=3 00:06:28.031 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 03:50:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val=software 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val=32 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val=32 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val=1 00:06:28.032 03:50:02 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val=Yes 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.032 03:50:02 -- accel/accel.sh@21 -- # val= 00:06:28.032 03:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # IFS=: 00:06:28.032 03:50:02 -- accel/accel.sh@20 -- # read -r var val 00:06:28.969 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:28.969 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.969 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:28.969 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:28.969 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:28.969 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.969 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:28.969 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:28.969 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:28.969 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.969 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:29.228 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:29.228 ************************************ 00:06:29.228 END TEST accel_xor 00:06:29.228 ************************************ 00:06:29.228 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:29.228 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:29.228 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:29.228 03:50:04 -- accel/accel.sh@21 -- # val= 00:06:29.228 03:50:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # IFS=: 00:06:29.228 03:50:04 -- accel/accel.sh@20 -- # read -r var val 00:06:29.228 03:50:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.228 03:50:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:29.228 03:50:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.228 00:06:29.228 real 0m2.884s 00:06:29.228 user 0m2.474s 00:06:29.228 sys 0m0.208s 00:06:29.228 03:50:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.228 03:50:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.228 03:50:04 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:29.228 03:50:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:29.228 03:50:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.228 03:50:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.228 ************************************ 00:06:29.228 START TEST accel_dif_verify 00:06:29.228 ************************************ 
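Before the dif_verify body below: this workload exercises DIF (Data Integrity Field) verification, and the configuration it echoes (Block size: 512 bytes, Metadata size: 8 bytes) explains why its throughput sits well under the copy-style workloads. The MiB/s columns in all of these result tables are plain arithmetic over the transfer counter and can be checked by hand:

    # Hedged check: the Total row of the dif_verify table further down
    # reports 124672 transfers/s at 4096 bytes each.
    echo $(( 124672 * 4096 / 1024 / 1024 ))   # prints 487, matching '487 MiB/s'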
00:06:29.228 03:50:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:29.228 03:50:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.228 03:50:04 -- accel/accel.sh@17 -- # local accel_module 00:06:29.228 03:50:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:29.228 03:50:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:29.228 03:50:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.228 03:50:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.228 03:50:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.228 03:50:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.228 03:50:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.228 03:50:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.228 03:50:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.228 03:50:04 -- accel/accel.sh@42 -- # jq -r . 00:06:29.228 [2024-11-08 03:50:04.156994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.228 [2024-11-08 03:50:04.157234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:06:29.228 [2024-11-08 03:50:04.279825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.487 [2024-11-08 03:50:04.347688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.865 03:50:05 -- accel/accel.sh@18 -- # out=' 00:06:30.865 SPDK Configuration: 00:06:30.865 Core mask: 0x1 00:06:30.865 00:06:30.865 Accel Perf Configuration: 00:06:30.865 Workload Type: dif_verify 00:06:30.865 Vector size: 4096 bytes 00:06:30.865 Transfer size: 4096 bytes 00:06:30.865 Block size: 512 bytes 00:06:30.865 Metadata size: 8 bytes 00:06:30.865 Vector count 1 00:06:30.865 Module: software 00:06:30.865 Queue depth: 32 00:06:30.865 Allocate depth: 32 00:06:30.865 # threads/core: 1 00:06:30.865 Run time: 1 seconds 00:06:30.865 Verify: No 00:06:30.865 00:06:30.865 Running for 1 seconds... 00:06:30.865 00:06:30.865 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.865 ------------------------------------------------------------------------------------ 00:06:30.865 0,0 124672/s 494 MiB/s 0 0 00:06:30.865 ==================================================================================== 00:06:30.865 Total 124672/s 487 MiB/s 0 0' 00:06:30.865 03:50:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:30.865 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.865 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.865 03:50:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.865 03:50:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.865 03:50:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.865 03:50:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.865 03:50:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.865 03:50:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.865 03:50:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.865 03:50:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.865 03:50:05 -- accel/accel.sh@42 -- # jq -r . 00:06:30.866 [2024-11-08 03:50:05.575332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.866 [2024-11-08 03:50:05.575674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:06:30.866 [2024-11-08 03:50:05.698309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.866 [2024-11-08 03:50:05.763202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=0x1 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=dif_verify 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=software 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 
-- # val=32 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=32 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=1 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val=No 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:30.866 03:50:05 -- accel/accel.sh@21 -- # val= 00:06:30.866 03:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # IFS=: 00:06:30.866 03:50:05 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@21 -- # val= 00:06:32.243 03:50:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # IFS=: 00:06:32.243 03:50:06 -- accel/accel.sh@20 -- # read -r var val 00:06:32.243 03:50:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.243 03:50:06 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:32.243 03:50:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.243 00:06:32.243 real 0m2.864s 00:06:32.243 user 0m2.472s 00:06:32.243 sys 0m0.192s 00:06:32.243 03:50:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.243 03:50:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.243 ************************************ 00:06:32.243 END TEST 
accel_dif_verify 00:06:32.243 ************************************ 00:06:32.243 03:50:07 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:32.243 03:50:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:32.243 03:50:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.243 03:50:07 -- common/autotest_common.sh@10 -- # set +x 00:06:32.243 ************************************ 00:06:32.243 START TEST accel_dif_generate 00:06:32.243 ************************************ 00:06:32.243 03:50:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:32.243 03:50:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.243 03:50:07 -- accel/accel.sh@17 -- # local accel_module 00:06:32.243 03:50:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:32.243 03:50:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:32.243 03:50:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.243 03:50:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.243 03:50:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.243 03:50:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.243 03:50:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.243 03:50:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.243 03:50:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.243 03:50:07 -- accel/accel.sh@42 -- # jq -r . 00:06:32.243 [2024-11-08 03:50:07.069058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.243 [2024-11-08 03:50:07.069294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:06:32.243 [2024-11-08 03:50:07.206412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.243 [2024-11-08 03:50:07.272346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.622 03:50:08 -- accel/accel.sh@18 -- # out=' 00:06:33.622 SPDK Configuration: 00:06:33.622 Core mask: 0x1 00:06:33.622 00:06:33.622 Accel Perf Configuration: 00:06:33.622 Workload Type: dif_generate 00:06:33.622 Vector size: 4096 bytes 00:06:33.622 Transfer size: 4096 bytes 00:06:33.622 Block size: 512 bytes 00:06:33.622 Metadata size: 8 bytes 00:06:33.622 Vector count 1 00:06:33.622 Module: software 00:06:33.622 Queue depth: 32 00:06:33.622 Allocate depth: 32 00:06:33.622 # threads/core: 1 00:06:33.622 Run time: 1 seconds 00:06:33.622 Verify: No 00:06:33.622 00:06:33.622 Running for 1 seconds... 
00:06:33.622 00:06:33.622 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.622 ------------------------------------------------------------------------------------ 00:06:33.622 0,0 147840/s 577 MiB/s 0 0 00:06:33.622 ==================================================================================== 00:06:33.622 Total 147840/s 577 MiB/s 0 0' 00:06:33.622 03:50:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:33.622 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.622 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.622 03:50:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.622 03:50:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.622 03:50:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.622 03:50:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.622 03:50:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.622 03:50:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.622 03:50:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.622 03:50:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.622 03:50:08 -- accel/accel.sh@42 -- # jq -r . 00:06:33.622 [2024-11-08 03:50:08.499916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.622 [2024-11-08 03:50:08.500153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:06:33.622 [2024-11-08 03:50:08.623147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.622 [2024-11-08 03:50:08.690029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=0x1 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=dif_generate 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 
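The xtrace above also shows the harness mechanics: build_accel_config assembles a JSON accel configuration (accel_json_cfg=() stays empty here, since every [[ 0 -gt 0 ]] guard is false), pipes it through jq to accel_perf's -c /dev/fd/62, and then parses the tool's report back into variables with IFS=: and read. A minimal sketch of rerunning the dif_generate case by hand, assuming the repo layout shown in this log and that the empty JSON config can simply be dropped:

  cd /home/vagrant/spdk_repo/spdk
  # Same workload and duration as the run above; the printed defaults
  # (4096-byte transfers, queue depth 32, software module) should apply.
  ./build/examples/accel_perf -t 1 -w dif_generate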
00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=software 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=32 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=32 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=1 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val=No 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:33.881 03:50:08 -- accel/accel.sh@21 -- # val= 00:06:33.881 03:50:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # IFS=: 00:06:33.881 03:50:08 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- 
accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@21 -- # val= 00:06:34.816 03:50:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # IFS=: 00:06:34.816 03:50:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.816 03:50:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.816 03:50:09 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:34.816 03:50:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.816 00:06:34.816 real 0m2.864s 00:06:34.816 user 0m2.456s 00:06:34.816 sys 0m0.208s 00:06:34.816 ************************************ 00:06:34.816 END TEST accel_dif_generate 00:06:34.816 ************************************ 00:06:34.816 03:50:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.816 03:50:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.075 03:50:09 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:35.075 03:50:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:35.075 03:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.075 03:50:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.075 ************************************ 00:06:35.075 START TEST accel_dif_generate_copy 00:06:35.075 ************************************ 00:06:35.075 03:50:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:35.075 03:50:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.075 03:50:09 -- accel/accel.sh@17 -- # local accel_module 00:06:35.075 03:50:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:35.075 03:50:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:35.075 03:50:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.075 03:50:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.075 03:50:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.075 03:50:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.075 03:50:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.075 03:50:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.075 03:50:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.075 03:50:09 -- accel/accel.sh@42 -- # jq -r . 00:06:35.075 [2024-11-08 03:50:09.987018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:35.075 [2024-11-08 03:50:09.987611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:06:35.075 [2024-11-08 03:50:10.131992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.333 [2024-11-08 03:50:10.201876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.711 03:50:11 -- accel/accel.sh@18 -- # out=' 00:06:36.711 SPDK Configuration: 00:06:36.711 Core mask: 0x1 00:06:36.711 00:06:36.711 Accel Perf Configuration: 00:06:36.711 Workload Type: dif_generate_copy 00:06:36.711 Vector size: 4096 bytes 00:06:36.711 Transfer size: 4096 bytes 00:06:36.711 Vector count 1 00:06:36.711 Module: software 00:06:36.711 Queue depth: 32 00:06:36.711 Allocate depth: 32 00:06:36.711 # threads/core: 1 00:06:36.711 Run time: 1 seconds 00:06:36.711 Verify: No 00:06:36.711 00:06:36.711 Running for 1 seconds... 00:06:36.711 00:06:36.711 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.711 ------------------------------------------------------------------------------------ 00:06:36.711 0,0 116128/s 453 MiB/s 0 0 00:06:36.711 ==================================================================================== 00:06:36.711 Total 116128/s 453 MiB/s 0 0' 00:06:36.711 03:50:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.711 03:50:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.711 03:50:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.711 03:50:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.711 03:50:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.711 03:50:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.711 03:50:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.711 03:50:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.711 03:50:11 -- accel/accel.sh@42 -- # jq -r . 00:06:36.711 [2024-11-08 03:50:11.431668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
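A quick consistency check on these result tables: the Bandwidth column appears to be the transfer rate multiplied by the 4096-byte transfer size and truncated to whole MiB/s. Assuming that formula, the dif_generate_copy row above can be verified with shell arithmetic:

  # 116128 transfers/s x 4096 B, floored to MiB/s
  echo $(( 116128 * 4096 / 1048576 ))   # prints 453, matching the Total row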
00:06:36.711 [2024-11-08 03:50:11.431745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ] 00:06:36.711 [2024-11-08 03:50:11.561022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.711 [2024-11-08 03:50:11.625856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val=0x1 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val=software 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.711 03:50:11 -- accel/accel.sh@21 -- # val=32 00:06:36.711 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.711 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 -- # val=32 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 
-- # val=1 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 -- # val=No 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:36.712 03:50:11 -- accel/accel.sh@21 -- # val= 00:06:36.712 03:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # IFS=: 00:06:36.712 03:50:11 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@21 -- # val= 00:06:38.089 03:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.089 03:50:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.089 03:50:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.089 03:50:12 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:38.089 03:50:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.089 00:06:38.089 real 0m2.892s 00:06:38.089 user 0m2.471s 00:06:38.089 sys 0m0.218s 00:06:38.089 03:50:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.089 03:50:12 -- common/autotest_common.sh@10 -- # set +x 00:06:38.089 ************************************ 00:06:38.089 END TEST accel_dif_generate_copy 00:06:38.089 ************************************ 00:06:38.089 03:50:12 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:38.089 03:50:12 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.089 03:50:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:38.089 03:50:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.089 03:50:12 -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.089 ************************************ 00:06:38.089 START TEST accel_comp 00:06:38.089 ************************************ 00:06:38.089 03:50:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.089 03:50:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.089 03:50:12 -- accel/accel.sh@17 -- # local accel_module 00:06:38.089 03:50:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.089 03:50:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.089 03:50:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.089 03:50:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.089 03:50:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.089 03:50:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.089 03:50:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.089 03:50:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.089 03:50:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.089 03:50:12 -- accel/accel.sh@42 -- # jq -r . 00:06:38.089 [2024-11-08 03:50:12.930734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.089 [2024-11-08 03:50:12.930838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:06:38.089 [2024-11-08 03:50:13.067192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.089 [2024-11-08 03:50:13.136818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.467 03:50:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:39.467 00:06:39.467 SPDK Configuration: 00:06:39.467 Core mask: 0x1 00:06:39.467 00:06:39.467 Accel Perf Configuration: 00:06:39.467 Workload Type: compress 00:06:39.467 Transfer size: 4096 bytes 00:06:39.467 Vector count 1 00:06:39.467 Module: software 00:06:39.467 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.467 Queue depth: 32 00:06:39.467 Allocate depth: 32 00:06:39.467 # threads/core: 1 00:06:39.467 Run time: 1 seconds 00:06:39.467 Verify: No 00:06:39.467 00:06:39.467 Running for 1 seconds... 
00:06:39.467 00:06:39.467 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.467 ------------------------------------------------------------------------------------ 00:06:39.467 0,0 58912/s 230 MiB/s 0 0 00:06:39.467 ==================================================================================== 00:06:39.467 Total 58912/s 230 MiB/s 0 0' 00:06:39.467 03:50:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.467 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.467 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.467 03:50:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.467 03:50:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.467 03:50:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.467 03:50:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.467 03:50:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.467 03:50:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.467 03:50:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.467 03:50:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.467 03:50:14 -- accel/accel.sh@42 -- # jq -r . 00:06:39.467 [2024-11-08 03:50:14.378661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.467 [2024-11-08 03:50:14.378746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:06:39.467 [2024-11-08 03:50:14.507737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.726 [2024-11-08 03:50:14.575187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val=0x1 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.726 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.726 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.726 03:50:14 -- accel/accel.sh@21 -- # val=compress 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 
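Unlike the dif cases, the compress workload needs real input, which is why the invocation above adds -l /home/vagrant/spdk_repo/spdk/test/accel/bib and the configuration dump gains a File Name line. A minimal standalone sketch, under the same layout assumption as before:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib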
00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=software 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=32 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=32 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=1 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val=No 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:39.727 03:50:14 -- accel/accel.sh@21 -- # val= 00:06:39.727 03:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # IFS=: 00:06:39.727 03:50:14 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 
00:06:41.103 ************************************ 00:06:41.103 END TEST accel_comp 00:06:41.103 ************************************ 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@21 -- # val= 00:06:41.103 03:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # IFS=: 00:06:41.103 03:50:15 -- accel/accel.sh@20 -- # read -r var val 00:06:41.103 03:50:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.103 03:50:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:41.103 03:50:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.103 00:06:41.103 real 0m2.894s 00:06:41.103 user 0m2.483s 00:06:41.103 sys 0m0.209s 00:06:41.103 03:50:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.103 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.103 03:50:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.103 03:50:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:41.103 03:50:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.103 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.103 ************************************ 00:06:41.103 START TEST accel_decomp 00:06:41.103 ************************************ 00:06:41.103 03:50:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.103 03:50:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.103 03:50:15 -- accel/accel.sh@17 -- # local accel_module 00:06:41.103 03:50:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.103 03:50:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.103 03:50:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.103 03:50:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.103 03:50:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.103 03:50:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.103 03:50:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.103 03:50:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.103 03:50:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.103 03:50:15 -- accel/accel.sh@42 -- # jq -r . 00:06:41.103 [2024-11-08 03:50:15.872966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.103 [2024-11-08 03:50:15.873062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:06:41.103 [2024-11-08 03:50:16.010296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.103 [2024-11-08 03:50:16.075983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.479 03:50:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:42.479 00:06:42.479 SPDK Configuration: 00:06:42.479 Core mask: 0x1 00:06:42.479 00:06:42.479 Accel Perf Configuration: 00:06:42.479 Workload Type: decompress 00:06:42.479 Transfer size: 4096 bytes 00:06:42.479 Vector count 1 00:06:42.479 Module: software 00:06:42.479 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.479 Queue depth: 32 00:06:42.479 Allocate depth: 32 00:06:42.479 # threads/core: 1 00:06:42.479 Run time: 1 seconds 00:06:42.479 Verify: Yes 00:06:42.479 00:06:42.479 Running for 1 seconds... 00:06:42.479 00:06:42.479 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.479 ------------------------------------------------------------------------------------ 00:06:42.479 0,0 84448/s 329 MiB/s 0 0 00:06:42.479 ==================================================================================== 00:06:42.479 Total 84448/s 329 MiB/s 0 0' 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.479 03:50:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.479 03:50:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.479 03:50:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.479 03:50:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.479 03:50:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.479 03:50:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.479 03:50:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.479 03:50:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.479 03:50:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.479 03:50:17 -- accel/accel.sh@42 -- # jq -r . 00:06:42.479 [2024-11-08 03:50:17.322687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
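The decompress run additionally passes -y, which lines up with Verify: Yes in the dump above (the compress run without -y reported Verify: No): each decompressed buffer is checked, so corruption would surface in the Failed/Miscompares columns rather than silently inflating throughput. The equivalent manual invocation, same assumptions as the earlier sketches:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y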
00:06:42.479 [2024-11-08 03:50:17.322783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:06:42.479 [2024-11-08 03:50:17.458059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.479 [2024-11-08 03:50:17.527613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.479 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.479 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.479 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.479 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.479 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.479 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.479 03:50:17 -- accel/accel.sh@21 -- # val=0x1 00:06:42.479 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.479 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.479 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.479 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=decompress 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=software 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=32 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- 
accel/accel.sh@21 -- # val=32 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=1 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val=Yes 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:42.738 03:50:17 -- accel/accel.sh@21 -- # val= 00:06:42.738 03:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # IFS=: 00:06:42.738 03:50:17 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@21 -- # val= 00:06:43.675 ************************************ 00:06:43.675 END TEST accel_decomp 00:06:43.675 ************************************ 00:06:43.675 03:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # IFS=: 00:06:43.675 03:50:18 -- accel/accel.sh@20 -- # read -r var val 00:06:43.675 03:50:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.675 03:50:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:43.675 03:50:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.675 00:06:43.675 real 0m2.900s 00:06:43.675 user 0m2.494s 00:06:43.675 sys 0m0.209s 00:06:43.675 03:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.675 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:06:43.934 03:50:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:43.934 03:50:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:43.934 03:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.934 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:06:43.934 ************************************ 00:06:43.934 START TEST accel_decmop_full 00:06:43.934 ************************************ 00:06:43.934 03:50:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.934 03:50:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.934 03:50:18 -- accel/accel.sh@17 -- # local accel_module 00:06:43.934 03:50:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.934 03:50:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.934 03:50:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.934 03:50:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.934 03:50:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.934 03:50:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.934 03:50:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.934 03:50:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.934 03:50:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.934 03:50:18 -- accel/accel.sh@42 -- # jq -r . 00:06:43.934 [2024-11-08 03:50:18.824002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.934 [2024-11-08 03:50:18.824237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:06:43.935 [2024-11-08 03:50:18.961962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.935 [2024-11-08 03:50:19.028029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.309 03:50:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:45.309 00:06:45.309 SPDK Configuration: 00:06:45.309 Core mask: 0x1 00:06:45.309 00:06:45.309 Accel Perf Configuration: 00:06:45.309 Workload Type: decompress 00:06:45.309 Transfer size: 111250 bytes 00:06:45.309 Vector count 1 00:06:45.309 Module: software 00:06:45.309 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.309 Queue depth: 32 00:06:45.309 Allocate depth: 32 00:06:45.309 # threads/core: 1 00:06:45.309 Run time: 1 seconds 00:06:45.309 Verify: Yes 00:06:45.309 00:06:45.309 Running for 1 seconds... 
00:06:45.309 00:06:45.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.309 ------------------------------------------------------------------------------------ 00:06:45.309 0,0 5312/s 563 MiB/s 0 0 00:06:45.309 ==================================================================================== 00:06:45.309 Total 5312/s 563 MiB/s 0 0' 00:06:45.309 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.309 03:50:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.309 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.309 03:50:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.309 03:50:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.310 03:50:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.310 03:50:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.310 03:50:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.310 03:50:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.310 03:50:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.310 03:50:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.310 03:50:20 -- accel/accel.sh@42 -- # jq -r . 00:06:45.310 [2024-11-08 03:50:20.293783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.310 [2024-11-08 03:50:20.294546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59336 ] 00:06:45.569 [2024-11-08 03:50:20.433283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.569 [2024-11-08 03:50:20.513011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=0x1 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=decompress 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:45.569 03:50:20 -- accel/accel.sh@20 
-- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=software 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=32 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=32 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=1 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val=Yes 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:45.569 03:50:20 -- accel/accel.sh@21 -- # val= 00:06:45.569 03:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # IFS=: 00:06:45.569 03:50:20 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # 
val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@21 -- # val= 00:06:46.946 03:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # IFS=: 00:06:46.946 03:50:21 -- accel/accel.sh@20 -- # read -r var val 00:06:46.946 03:50:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.946 03:50:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.946 03:50:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.946 00:06:46.946 real 0m2.973s 00:06:46.946 user 0m2.544s 00:06:46.946 sys 0m0.229s 00:06:46.946 03:50:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.946 ************************************ 00:06:46.946 END TEST accel_decmop_full 00:06:46.946 ************************************ 00:06:46.946 03:50:21 -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 03:50:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.946 03:50:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:46.946 03:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.946 03:50:21 -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 ************************************ 00:06:46.946 START TEST accel_decomp_mcore 00:06:46.946 ************************************ 00:06:46.946 03:50:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.946 03:50:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.946 03:50:21 -- accel/accel.sh@17 -- # local accel_module 00:06:46.946 03:50:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.946 03:50:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.946 03:50:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.946 03:50:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.946 03:50:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.946 03:50:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.946 03:50:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.946 03:50:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.946 03:50:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.946 03:50:21 -- accel/accel.sh@42 -- # jq -r . 00:06:46.946 [2024-11-08 03:50:21.856483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
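The only flag separating accel_decmop_full from the plain accel_decomp case is the trailing -o 0. Judging by the configuration dump it swaps the default 4096-byte transfer size for whole 111250-byte compressed chunks, which is why the transfer rate drops to a few thousand per second while bandwidth stays in the hundreds of MiB/s. A sketch of the same run, under the assumptions above:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0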
00:06:46.946 [2024-11-08 03:50:21.856589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:06:46.946 [2024-11-08 03:50:21.995089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.205 [2024-11-08 03:50:22.103615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.205 [2024-11-08 03:50:22.103754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.205 [2024-11-08 03:50:22.103883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.205 [2024-11-08 03:50:22.103884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.581 03:50:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.581 00:06:48.581 SPDK Configuration: 00:06:48.581 Core mask: 0xf 00:06:48.581 00:06:48.581 Accel Perf Configuration: 00:06:48.581 Workload Type: decompress 00:06:48.581 Transfer size: 4096 bytes 00:06:48.581 Vector count 1 00:06:48.581 Module: software 00:06:48.581 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.581 Queue depth: 32 00:06:48.581 Allocate depth: 32 00:06:48.581 # threads/core: 1 00:06:48.581 Run time: 1 seconds 00:06:48.581 Verify: Yes 00:06:48.581 00:06:48.581 Running for 1 seconds... 00:06:48.581 00:06:48.581 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.581 ------------------------------------------------------------------------------------ 00:06:48.581 0,0 65984/s 257 MiB/s 0 0 00:06:48.581 3,0 62144/s 242 MiB/s 0 0 00:06:48.581 2,0 65152/s 254 MiB/s 0 0 00:06:48.581 1,0 64992/s 253 MiB/s 0 0 00:06:48.581 ==================================================================================== 00:06:48.581 Total 258272/s 1008 MiB/s 0 0' 00:06:48.581 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.581 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.581 03:50:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.581 03:50:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.581 03:50:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.581 03:50:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.581 03:50:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.581 03:50:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.581 03:50:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.581 03:50:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.581 03:50:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.581 03:50:23 -- accel/accel.sh@42 -- # jq -r . 00:06:48.581 [2024-11-08 03:50:23.372575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
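accel_decomp_mcore is the first case here run with -m 0xf, a core mask selecting cores 0-3: the EAL banner reports four available cores, one reactor starts per core, and the results table gains one row per core. The Total row is the plain sum of the per-core transfer rates, checkable with the same assumed floor-to-MiB formula as before:

  echo $(( 65984 + 62144 + 65152 + 64992 ))   # prints 258272 transfers/s
  echo $(( 258272 * 4096 / 1048576 ))         # prints 1008 MiB/s, matching Total
  # manual equivalent of the run above:
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf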
00:06:48.581 [2024-11-08 03:50:23.372821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59393 ] 00:06:48.581 [2024-11-08 03:50:23.509808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.581 [2024-11-08 03:50:23.597086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.581 [2024-11-08 03:50:23.597205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.581 [2024-11-08 03:50:23.597331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.581 [2024-11-08 03:50:23.597331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=0xf 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=decompress 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=software 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 
00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=32 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=32 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=1 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val=Yes 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:48.582 03:50:23 -- accel/accel.sh@21 -- # val= 00:06:48.582 03:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # IFS=: 00:06:48.582 03:50:23 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- 
accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@21 -- # val= 00:06:49.958 03:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # IFS=: 00:06:49.958 03:50:24 -- accel/accel.sh@20 -- # read -r var val 00:06:49.958 03:50:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.958 03:50:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:49.958 03:50:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.958 00:06:49.958 real 0m3.005s 00:06:49.958 user 0m9.334s 00:06:49.958 sys 0m0.259s 00:06:49.958 ************************************ 00:06:49.958 END TEST accel_decomp_mcore 00:06:49.958 ************************************ 00:06:49.958 03:50:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.958 03:50:24 -- common/autotest_common.sh@10 -- # set +x 00:06:49.958 03:50:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.958 03:50:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:49.958 03:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.958 03:50:24 -- common/autotest_common.sh@10 -- # set +x 00:06:49.958 ************************************ 00:06:49.958 START TEST accel_decomp_full_mcore 00:06:49.958 ************************************ 00:06:49.958 03:50:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.958 03:50:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.958 03:50:24 -- accel/accel.sh@17 -- # local accel_module 00:06:49.958 03:50:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.958 03:50:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.958 03:50:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.958 03:50:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.958 03:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.958 03:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.958 03:50:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.958 03:50:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.958 03:50:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.958 03:50:24 -- accel/accel.sh@42 -- # jq -r . 00:06:49.959 [2024-11-08 03:50:24.910912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.959 [2024-11-08 03:50:24.911011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:06:49.959 [2024-11-08 03:50:25.039762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.217 [2024-11-08 03:50:25.112198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.217 [2024-11-08 03:50:25.112300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.217 [2024-11-08 03:50:25.112466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.217 [2024-11-08 03:50:25.112466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.635 03:50:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
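The Core,Thread table above is the easiest place to sanity-check a multicore run: four cores at roughly 62-66 K transfers/s each should sum to the Total row. A small sketch that recomputes the aggregate, assuming the table has been captured verbatim into a file named perf.txt (a hypothetical name, not something the test writes):

# Sum the per-core transfer rates; a field like "65984/s" coerces to 65984
# under awk arithmetic, so "$2 + 0" strips the unit suffix.
awk '/^[0-9]+,[0-9]+ / { sum += $2 + 0 } END { printf "Total %d/s\n", sum }' perf.txt

For the run above this prints Total 258272/s, matching the reported aggregate.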
00:06:51.635 00:06:51.635 SPDK Configuration: 00:06:51.635 Core mask: 0xf 00:06:51.635 00:06:51.635 Accel Perf Configuration: 00:06:51.635 Workload Type: decompress 00:06:51.635 Transfer size: 111250 bytes 00:06:51.635 Vector count 1 00:06:51.635 Module: software 00:06:51.635 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.635 Queue depth: 32 00:06:51.635 Allocate depth: 32 00:06:51.635 # threads/core: 1 00:06:51.635 Run time: 1 seconds 00:06:51.635 Verify: Yes 00:06:51.635 00:06:51.635 Running for 1 seconds... 00:06:51.635 00:06:51.635 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.635 ------------------------------------------------------------------------------------ 00:06:51.635 0,0 4992/s 206 MiB/s 0 0 00:06:51.635 3,0 4992/s 206 MiB/s 0 0 00:06:51.635 2,0 5056/s 208 MiB/s 0 0 00:06:51.635 1,0 5024/s 207 MiB/s 0 0 00:06:51.635 ==================================================================================== 00:06:51.635 Total 20064/s 2128 MiB/s 0 0' 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.635 03:50:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.635 03:50:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.635 03:50:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.635 03:50:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.635 03:50:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.635 03:50:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.635 03:50:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.635 03:50:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.635 03:50:26 -- accel/accel.sh@42 -- # jq -r . 00:06:51.635 [2024-11-08 03:50:26.377933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
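The dense runs of IFS=: / read -r var val / case "$var" in lines throughout this section are bash xtrace from accel.sh parsing the configuration block that accel_perf prints on startup. In outline it is a plain key/value reader; the following is a sketch of the shape, not the actual accel.sh source:

# Each "Key: value" line of the captured $out is split on ':' and dispatched;
# the trace shows the two assignments that matter, accel_opc and accel_module.
while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val// /} ;;    # e.g. decompress
        *'Module'*)        accel_module=${val// /} ;; # e.g. software
    esac
done <<< "$out"

The [[ -n software ]] and [[ -n decompress ]] checks that close each test assert that both values were actually picked out of the output.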
00:06:51.635 [2024-11-08 03:50:26.378190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:06:51.635 [2024-11-08 03:50:26.507839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.635 [2024-11-08 03:50:26.577106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.635 [2024-11-08 03:50:26.577238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.635 [2024-11-08 03:50:26.577327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.635 [2024-11-08 03:50:26.577331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=0xf 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=decompress 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=software 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 
00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=32 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=32 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=1 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val=Yes 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:51.635 03:50:26 -- accel/accel.sh@21 -- # val= 00:06:51.635 03:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # IFS=: 00:06:51.635 03:50:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:53.013 ************************************ 00:06:53.013 END TEST accel_decomp_full_mcore 00:06:53.013 ************************************ 00:06:53.013 03:50:27 -- accel/accel.sh@21 -- # val= 00:06:53.013 03:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 03:50:27 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 03:50:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.013 03:50:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:53.013 03:50:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.013 00:06:53.013 real 0m2.935s 00:06:53.013 user 0m9.346s 00:06:53.013 sys 0m0.234s 00:06:53.013 03:50:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.013 03:50:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.013 03:50:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.013 03:50:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:53.013 03:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.013 03:50:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.013 ************************************ 00:06:53.013 START TEST accel_decomp_mthread 00:06:53.013 ************************************ 00:06:53.013 03:50:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.013 03:50:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.013 03:50:27 -- accel/accel.sh@17 -- # local accel_module 00:06:53.013 03:50:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.013 03:50:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.013 03:50:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.013 03:50:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.013 03:50:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.013 03:50:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.013 03:50:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.013 03:50:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.013 03:50:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.013 03:50:27 -- accel/accel.sh@42 -- # jq -r . 00:06:53.013 [2024-11-08 03:50:27.891338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.013 [2024-11-08 03:50:27.891440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59487 ] 00:06:53.013 [2024-11-08 03:50:28.022847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.013 [2024-11-08 03:50:28.089302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.390 03:50:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.390 00:06:54.390 SPDK Configuration: 00:06:54.390 Core mask: 0x1 00:06:54.390 00:06:54.390 Accel Perf Configuration: 00:06:54.390 Workload Type: decompress 00:06:54.390 Transfer size: 4096 bytes 00:06:54.390 Vector count 1 00:06:54.390 Module: software 00:06:54.390 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.390 Queue depth: 32 00:06:54.390 Allocate depth: 32 00:06:54.390 # threads/core: 2 00:06:54.390 Run time: 1 seconds 00:06:54.390 Verify: Yes 00:06:54.390 00:06:54.390 Running for 1 seconds... 00:06:54.390 00:06:54.390 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.390 ------------------------------------------------------------------------------------ 00:06:54.390 0,1 42784/s 78 MiB/s 0 0 00:06:54.390 0,0 42656/s 78 MiB/s 0 0 00:06:54.390 ==================================================================================== 00:06:54.390 Total 85440/s 333 MiB/s 0 0' 00:06:54.390 03:50:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.390 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.390 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.390 03:50:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.390 03:50:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.390 03:50:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.390 03:50:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.390 03:50:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.390 03:50:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.390 03:50:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.390 03:50:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.390 03:50:29 -- accel/accel.sh@42 -- # jq -r . 00:06:54.390 [2024-11-08 03:50:29.325239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
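Unlike the mcore cases, accel_decomp_mthread keeps the default single-core mask and instead passes -T 2, so the two result rows above (0,0 and 0,1) are two worker threads on the same core rather than two cores. A minimal sketch of the variation, under the same path assumptions as before:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y -T 2

The '# threads/core: 2' line in the configuration dump confirms the option took effect.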
00:06:54.390 [2024-11-08 03:50:29.325511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59506 ] 00:06:54.390 [2024-11-08 03:50:29.448478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.649 [2024-11-08 03:50:29.516454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=0x1 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=decompress 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=software 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=32 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- 
accel/accel.sh@21 -- # val=32 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=2 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.649 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.649 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.649 03:50:29 -- accel/accel.sh@21 -- # val=Yes 00:06:54.650 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.650 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.650 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:54.650 03:50:29 -- accel/accel.sh@21 -- # val= 00:06:54.650 03:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # IFS=: 00:06:54.650 03:50:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.024 03:50:30 -- accel/accel.sh@21 -- # val= 00:06:56.024 03:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.024 03:50:30 -- accel/accel.sh@20 -- # IFS=: 00:06:56.025 03:50:30 -- accel/accel.sh@20 -- # read -r var val 00:06:56.025 03:50:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.025 03:50:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.025 03:50:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.025 00:06:56.025 real 0m2.909s 00:06:56.025 user 0m2.498s 00:06:56.025 sys 0m0.213s 00:06:56.025 03:50:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.025 03:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:56.025 ************************************ 00:06:56.025 END 
TEST accel_decomp_mthread 00:06:56.025 ************************************ 00:06:56.025 03:50:30 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.025 03:50:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:56.025 03:50:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.025 03:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:56.025 ************************************ 00:06:56.025 START TEST accel_deomp_full_mthread 00:06:56.025 ************************************ 00:06:56.025 03:50:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.025 03:50:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.025 03:50:30 -- accel/accel.sh@17 -- # local accel_module 00:06:56.025 03:50:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.025 03:50:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.025 03:50:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.025 03:50:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.025 03:50:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.025 03:50:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.025 03:50:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.025 03:50:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.025 03:50:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.025 03:50:30 -- accel/accel.sh@42 -- # jq -r . 00:06:56.025 [2024-11-08 03:50:30.859999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.025 [2024-11-08 03:50:30.860094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59543 ] 00:06:56.025 [2024-11-08 03:50:30.997945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.025 [2024-11-08 03:50:31.067143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.400 03:50:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.400 00:06:57.400 SPDK Configuration: 00:06:57.400 Core mask: 0x1 00:06:57.400 00:06:57.400 Accel Perf Configuration: 00:06:57.400 Workload Type: decompress 00:06:57.400 Transfer size: 111250 bytes 00:06:57.400 Vector count 1 00:06:57.400 Module: software 00:06:57.400 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.400 Queue depth: 32 00:06:57.400 Allocate depth: 32 00:06:57.400 # threads/core: 2 00:06:57.400 Run time: 1 seconds 00:06:57.400 Verify: Yes 00:06:57.400 00:06:57.400 Running for 1 seconds... 
00:06:57.400 00:06:57.400 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.400 ------------------------------------------------------------------------------------ 00:06:57.400 0,1 2688/s 111 MiB/s 0 0 00:06:57.400 0,0 2688/s 111 MiB/s 0 0 00:06:57.400 ==================================================================================== 00:06:57.400 Total 5376/s 570 MiB/s 0 0' 00:06:57.400 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.400 03:50:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.400 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.400 03:50:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.400 03:50:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.400 03:50:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.400 03:50:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.400 03:50:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.400 03:50:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.400 03:50:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.400 03:50:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.400 03:50:32 -- accel/accel.sh@42 -- # jq -r . 00:06:57.400 [2024-11-08 03:50:32.366595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.400 [2024-11-08 03:50:32.366696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:06:57.400 [2024-11-08 03:50:32.502054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.659 [2024-11-08 03:50:32.581768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=0x1 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=decompress 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=software 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=32 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=32 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=2 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val=Yes 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:57.659 03:50:32 -- accel/accel.sh@21 -- # val= 00:06:57.659 03:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # IFS=: 00:06:57.659 03:50:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # 
read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@21 -- # val= 00:06:59.035 03:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # IFS=: 00:06:59.035 03:50:33 -- accel/accel.sh@20 -- # read -r var val 00:06:59.035 03:50:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.035 03:50:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.035 03:50:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.035 00:06:59.035 real 0m3.018s 00:06:59.035 user 0m2.591s 00:06:59.035 sys 0m0.226s 00:06:59.035 03:50:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.035 03:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:59.035 ************************************ 00:06:59.035 END TEST accel_deomp_full_mthread 00:06:59.035 ************************************ 00:06:59.035 03:50:33 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:59.035 03:50:33 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.035 03:50:33 -- accel/accel.sh@129 -- # build_accel_config 00:06:59.035 03:50:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.035 03:50:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:59.035 03:50:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.035 03:50:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.035 03:50:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.035 03:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:59.035 03:50:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.035 03:50:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.035 03:50:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.035 03:50:33 -- accel/accel.sh@42 -- # jq -r . 00:06:59.035 ************************************ 00:06:59.035 START TEST accel_dif_functional_tests 00:06:59.035 ************************************ 00:06:59.035 03:50:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.035 [2024-11-08 03:50:33.974311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
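The dif.c *ERROR* lines in the CUnit output below are expected: accel_dif_functional_tests deliberately feeds corrupted Guard, App Tag and Ref Tag fields through the verify path and asserts that each mismatch is caught, so every 'Failed to compare' message belongs to a test that ends in 'passed'. The binary behind the suite is invoked as in the trace; a sketch of the call (the -c argument receives a generated JSON accel config over file descriptor 62, so it is not meaningful to run bare):

# Functional DIF tests; the negative-path cases intentionally trigger dif.c errors.
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62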
00:06:59.035 [2024-11-08 03:50:33.975125] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:06:59.035 [2024-11-08 03:50:34.118786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.294 [2024-11-08 03:50:34.225329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.294 [2024-11-08 03:50:34.225464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.294 [2024-11-08 03:50:34.225465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.294 00:06:59.294 00:06:59.294 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.294 http://cunit.sourceforge.net/ 00:06:59.294 00:06:59.294 00:06:59.294 Suite: accel_dif 00:06:59.294 Test: verify: DIF generated, GUARD check ...passed 00:06:59.294 Test: verify: DIF generated, APPTAG check ...passed 00:06:59.294 Test: verify: DIF generated, REFTAG check ...passed 00:06:59.294 Test: verify: DIF not generated, GUARD check ...[2024-11-08 03:50:34.318972] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.294 [2024-11-08 03:50:34.319046] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.294 passed 00:06:59.294 Test: verify: DIF not generated, APPTAG check ...passed 00:06:59.294 Test: verify: DIF not generated, REFTAG check ...[2024-11-08 03:50:34.319085] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.294 [2024-11-08 03:50:34.319226] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.294 passed 00:06:59.294 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:59.294 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-08 03:50:34.319266] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.294 [2024-11-08 03:50:34.319292] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.294 [2024-11-08 03:50:34.319404] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:59.294 passed 00:06:59.294 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:59.294 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:59.294 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:59.294 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:59.294 Test: generate copy: DIF generated, GUARD check ...[2024-11-08 03:50:34.319797] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:59.294 passed 00:06:59.294 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:59.294 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:59.294 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:59.294 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:59.294 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:59.294 Test: generate copy: iovecs-len validate ...passed 00:06:59.294 Test: generate copy: buffer alignment validate ...[2024-11-08 03:50:34.320213] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:59.294 passed 00:06:59.294 00:06:59.294 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.294 suites 1 1 n/a 0 0 00:06:59.294 tests 20 20 20 0 0 00:06:59.294 asserts 204 204 204 0 n/a 00:06:59.294 00:06:59.294 Elapsed time = 0.003 seconds 00:06:59.552 00:06:59.552 real 0m0.643s 00:06:59.552 user 0m0.847s 00:06:59.552 sys 0m0.156s 00:06:59.552 03:50:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.552 03:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:59.552 ************************************ 00:06:59.552 END TEST accel_dif_functional_tests 00:06:59.552 ************************************ 00:06:59.552 ************************************ 00:06:59.552 END TEST accel 00:06:59.552 ************************************ 00:06:59.552 00:06:59.552 real 1m4.805s 00:06:59.552 user 1m8.642s 00:06:59.552 sys 0m6.497s 00:06:59.552 03:50:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.552 03:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:59.552 03:50:34 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:59.552 03:50:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.552 03:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.552 03:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:59.552 ************************************ 00:06:59.552 START TEST accel_rpc 00:06:59.552 ************************************ 00:06:59.552 03:50:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:59.811 * Looking for test storage... 00:06:59.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:59.811 03:50:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:59.811 03:50:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.811 03:50:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:59.811 03:50:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:59.811 03:50:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:59.811 03:50:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:59.811 03:50:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:59.811 03:50:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:59.811 03:50:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:59.811 03:50:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.811 03:50:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:59.811 03:50:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:59.811 03:50:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:59.811 03:50:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:59.811 03:50:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:59.811 03:50:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:59.811 03:50:34 -- scripts/common.sh@344 -- # : 1 00:06:59.811 03:50:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:59.811 03:50:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.811 03:50:34 -- scripts/common.sh@364 -- # decimal 1 00:06:59.811 03:50:34 -- scripts/common.sh@352 -- # local d=1 00:06:59.811 03:50:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.811 03:50:34 -- scripts/common.sh@354 -- # echo 1 00:06:59.811 03:50:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:59.811 03:50:34 -- scripts/common.sh@365 -- # decimal 2 00:06:59.811 03:50:34 -- scripts/common.sh@352 -- # local d=2 00:06:59.811 03:50:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.811 03:50:34 -- scripts/common.sh@354 -- # echo 2 00:06:59.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.811 03:50:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:59.812 03:50:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:59.812 03:50:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:59.812 03:50:34 -- scripts/common.sh@367 -- # return 0 00:06:59.812 03:50:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.812 03:50:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:59.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.812 --rc genhtml_branch_coverage=1 00:06:59.812 --rc genhtml_function_coverage=1 00:06:59.812 --rc genhtml_legend=1 00:06:59.812 --rc geninfo_all_blocks=1 00:06:59.812 --rc geninfo_unexecuted_blocks=1 00:06:59.812 00:06:59.812 ' 00:06:59.812 03:50:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:59.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.812 --rc genhtml_branch_coverage=1 00:06:59.812 --rc genhtml_function_coverage=1 00:06:59.812 --rc genhtml_legend=1 00:06:59.812 --rc geninfo_all_blocks=1 00:06:59.812 --rc geninfo_unexecuted_blocks=1 00:06:59.812 00:06:59.812 ' 00:06:59.812 03:50:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:59.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.812 --rc genhtml_branch_coverage=1 00:06:59.812 --rc genhtml_function_coverage=1 00:06:59.812 --rc genhtml_legend=1 00:06:59.812 --rc geninfo_all_blocks=1 00:06:59.812 --rc geninfo_unexecuted_blocks=1 00:06:59.812 00:06:59.812 ' 00:06:59.812 03:50:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:59.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.812 --rc genhtml_branch_coverage=1 00:06:59.812 --rc genhtml_function_coverage=1 00:06:59.812 --rc genhtml_legend=1 00:06:59.812 --rc geninfo_all_blocks=1 00:06:59.812 --rc geninfo_unexecuted_blocks=1 00:06:59.812 00:06:59.812 ' 00:06:59.812 03:50:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.812 03:50:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59675 00:06:59.812 03:50:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 59675 00:06:59.812 03:50:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:59.812 03:50:34 -- common/autotest_common.sh@829 -- # '[' -z 59675 ']' 00:06:59.812 03:50:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.812 03:50:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.812 03:50:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
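The accel_rpc suite that follows drives a bare spdk_tgt started with --wait-for-rpc: it first tries to assign the copy opcode to a module named 'incorrect', then to the real software module, calls framework_start_init, and finally reads the assignment back. A hypothetical manual replay of the same sequence with rpc.py against the default /var/tmp/spdk.sock socket:

# Mirrors the rpc_cmd calls traced below; run from the spdk repository root.
scripts/rpc.py accel_assign_opc -o copy -m software
scripts/rpc.py framework_start_init
scripts/rpc.py accel_get_opc_assignments | grep software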
00:06:59.812 03:50:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.812 03:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:59.812 [2024-11-08 03:50:34.913863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.812 [2024-11-08 03:50:34.914269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:07:00.070 [2024-11-08 03:50:35.054810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.070 [2024-11-08 03:50:35.168667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.070 [2024-11-08 03:50:35.169177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.004 03:50:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.004 03:50:35 -- common/autotest_common.sh@862 -- # return 0 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:01.004 03:50:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.004 03:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.004 03:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.004 ************************************ 00:07:01.004 START TEST accel_assign_opcode 00:07:01.004 ************************************ 00:07:01.004 03:50:35 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:01.004 03:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.004 03:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.004 [2024-11-08 03:50:35.954007] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:01.004 03:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.004 03:50:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:01.004 03:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.005 03:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.005 [2024-11-08 03:50:35.961984] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:01.005 03:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.005 03:50:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:01.005 03:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.005 03:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.264 03:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.264 03:50:36 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:01.264 03:50:36 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:01.264 03:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.264 03:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.264 03:50:36 -- accel/accel_rpc.sh@42 -- # grep software 00:07:01.264 03:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.264 software 00:07:01.264 00:07:01.264 
real 0m0.291s 00:07:01.264 user 0m0.056s 00:07:01.264 sys 0m0.010s 00:07:01.264 03:50:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.264 ************************************ 00:07:01.264 END TEST accel_assign_opcode 00:07:01.264 ************************************ 00:07:01.264 03:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.264 03:50:36 -- accel/accel_rpc.sh@55 -- # killprocess 59675 00:07:01.264 03:50:36 -- common/autotest_common.sh@936 -- # '[' -z 59675 ']' 00:07:01.264 03:50:36 -- common/autotest_common.sh@940 -- # kill -0 59675 00:07:01.264 03:50:36 -- common/autotest_common.sh@941 -- # uname 00:07:01.264 03:50:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.264 03:50:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59675 00:07:01.264 killing process with pid 59675 00:07:01.264 03:50:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.264 03:50:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.264 03:50:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59675' 00:07:01.264 03:50:36 -- common/autotest_common.sh@955 -- # kill 59675 00:07:01.264 03:50:36 -- common/autotest_common.sh@960 -- # wait 59675 00:07:01.856 ************************************ 00:07:01.856 END TEST accel_rpc 00:07:01.856 ************************************ 00:07:01.856 00:07:01.856 real 0m2.064s 00:07:01.856 user 0m2.188s 00:07:01.856 sys 0m0.482s 00:07:01.856 03:50:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.856 03:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.856 03:50:36 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.856 03:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.856 03:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.856 03:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.856 ************************************ 00:07:01.856 START TEST app_cmdline 00:07:01.856 ************************************ 00:07:01.856 03:50:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.856 * Looking for test storage... 
00:07:01.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.856 03:50:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:01.856 03:50:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:01.856 03:50:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.856 03:50:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.856 03:50:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:01.856 03:50:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:01.856 03:50:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:01.856 03:50:36 -- scripts/common.sh@335 -- # IFS=.-: 00:07:01.856 03:50:36 -- scripts/common.sh@335 -- # read -ra ver1 00:07:01.856 03:50:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.856 03:50:36 -- scripts/common.sh@336 -- # read -ra ver2 00:07:01.856 03:50:36 -- scripts/common.sh@337 -- # local 'op=<' 00:07:01.856 03:50:36 -- scripts/common.sh@339 -- # ver1_l=2 00:07:01.856 03:50:36 -- scripts/common.sh@340 -- # ver2_l=1 00:07:01.856 03:50:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:01.856 03:50:36 -- scripts/common.sh@343 -- # case "$op" in 00:07:01.856 03:50:36 -- scripts/common.sh@344 -- # : 1 00:07:01.856 03:50:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:01.856 03:50:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.856 03:50:36 -- scripts/common.sh@364 -- # decimal 1 00:07:01.856 03:50:36 -- scripts/common.sh@352 -- # local d=1 00:07:01.856 03:50:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.856 03:50:36 -- scripts/common.sh@354 -- # echo 1 00:07:01.856 03:50:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.856 03:50:36 -- scripts/common.sh@365 -- # decimal 2 00:07:01.856 03:50:36 -- scripts/common.sh@352 -- # local d=2 00:07:01.856 03:50:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.856 03:50:36 -- scripts/common.sh@354 -- # echo 2 00:07:01.856 03:50:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.856 03:50:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.856 03:50:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.856 03:50:36 -- scripts/common.sh@367 -- # return 0 00:07:01.856 03:50:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.856 03:50:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.856 --rc genhtml_branch_coverage=1 00:07:01.856 --rc genhtml_function_coverage=1 00:07:01.856 --rc genhtml_legend=1 00:07:01.856 --rc geninfo_all_blocks=1 00:07:01.856 --rc geninfo_unexecuted_blocks=1 00:07:01.856 00:07:01.856 ' 00:07:01.856 03:50:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.857 --rc genhtml_branch_coverage=1 00:07:01.857 --rc genhtml_function_coverage=1 00:07:01.857 --rc genhtml_legend=1 00:07:01.857 --rc geninfo_all_blocks=1 00:07:01.857 --rc geninfo_unexecuted_blocks=1 00:07:01.857 00:07:01.857 ' 00:07:01.857 03:50:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.857 --rc genhtml_branch_coverage=1 00:07:01.857 --rc genhtml_function_coverage=1 00:07:01.857 --rc genhtml_legend=1 00:07:01.857 --rc geninfo_all_blocks=1 00:07:01.857 --rc geninfo_unexecuted_blocks=1 00:07:01.857 00:07:01.857 ' 00:07:01.857 03:50:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.857 --rc genhtml_branch_coverage=1 00:07:01.857 --rc genhtml_function_coverage=1 00:07:01.857 --rc genhtml_legend=1 00:07:01.857 --rc geninfo_all_blocks=1 00:07:01.857 --rc geninfo_unexecuted_blocks=1 00:07:01.857 00:07:01.857 ' 00:07:01.857 03:50:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.857 03:50:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59794 00:07:01.857 03:50:36 -- app/cmdline.sh@18 -- # waitforlisten 59794 00:07:01.857 03:50:36 -- common/autotest_common.sh@829 -- # '[' -z 59794 ']' 00:07:01.857 03:50:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.857 03:50:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.857 03:50:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.857 03:50:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.857 03:50:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.857 03:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.115 [2024-11-08 03:50:37.012575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
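This app_cmdline target is launched with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so exactly two methods answer and every other call must fail with JSON-RPC error -32601, which is what the env_dpdk_get_mem_stats probe further down demonstrates. The same check by hand (paths as in this run):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    scripts/rpc.py spdk_get_version        # allowed: returns the version object
    scripts/rpc.py rpc_get_methods         # allowed: lists just the two methods
    scripts/rpc.py env_dpdk_get_mem_stats \
        || echo "rejected with Code=-32601 Msg='Method not found', as the test expects"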
00:07:02.115 [2024-11-08 03:50:37.012952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:07:02.115 [2024-11-08 03:50:37.150977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.373 [2024-11-08 03:50:37.239204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.373 [2024-11-08 03:50:37.239667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.307 03:50:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.307 03:50:38 -- common/autotest_common.sh@862 -- # return 0 00:07:03.307 03:50:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:03.307 { 00:07:03.307 "fields": { 00:07:03.307 "commit": "c13c99a5e", 00:07:03.307 "major": 24, 00:07:03.307 "minor": 1, 00:07:03.307 "patch": 1, 00:07:03.307 "suffix": "-pre" 00:07:03.307 }, 00:07:03.307 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:03.307 } 00:07:03.307 03:50:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.307 03:50:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.307 03:50:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.307 03:50:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.307 03:50:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.307 03:50:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.307 03:50:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.307 03:50:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.307 03:50:38 -- app/cmdline.sh@26 -- # sort 00:07:03.307 03:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.307 03:50:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.307 03:50:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.307 03:50:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.307 03:50:38 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.307 03:50:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.307 03:50:38 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.307 03:50:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.307 03:50:38 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.307 03:50:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.307 03:50:38 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.307 03:50:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.307 03:50:38 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.307 03:50:38 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:03.307 03:50:38 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.566 2024/11/08 03:50:38 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:03.566 request: 00:07:03.566 { 00:07:03.566 "method": "env_dpdk_get_mem_stats", 00:07:03.566 "params": {} 00:07:03.566 } 00:07:03.566 Got JSON-RPC error response 00:07:03.566 GoRPCClient: error on JSON-RPC call 00:07:03.566 03:50:38 -- common/autotest_common.sh@653 -- # es=1 00:07:03.566 03:50:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.566 03:50:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.566 03:50:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.566 03:50:38 -- app/cmdline.sh@1 -- # killprocess 59794 00:07:03.566 03:50:38 -- common/autotest_common.sh@936 -- # '[' -z 59794 ']' 00:07:03.566 03:50:38 -- common/autotest_common.sh@940 -- # kill -0 59794 00:07:03.566 03:50:38 -- common/autotest_common.sh@941 -- # uname 00:07:03.566 03:50:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.566 03:50:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59794 00:07:03.824 03:50:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.824 03:50:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.824 03:50:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59794' 00:07:03.824 killing process with pid 59794 00:07:03.825 03:50:38 -- common/autotest_common.sh@955 -- # kill 59794 00:07:03.825 03:50:38 -- common/autotest_common.sh@960 -- # wait 59794 00:07:04.083 00:07:04.083 real 0m2.346s 00:07:04.083 user 0m2.949s 00:07:04.083 sys 0m0.512s 00:07:04.083 03:50:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.083 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.083 ************************************ 00:07:04.083 END TEST app_cmdline 00:07:04.083 ************************************ 00:07:04.083 03:50:39 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.083 03:50:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.083 03:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.083 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.083 ************************************ 00:07:04.083 START TEST version 00:07:04.083 ************************************ 00:07:04.083 03:50:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.341 * Looking for test storage... 
00:07:04.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:04.341 03:50:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.341 03:50:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.341 03:50:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.341 03:50:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.341 03:50:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.341 03:50:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.341 03:50:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.341 03:50:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.341 03:50:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.341 03:50:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.341 03:50:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.341 03:50:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.341 03:50:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.341 03:50:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.341 03:50:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.341 03:50:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.341 03:50:39 -- scripts/common.sh@344 -- # : 1 00:07:04.341 03:50:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.341 03:50:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.341 03:50:39 -- scripts/common.sh@364 -- # decimal 1 00:07:04.341 03:50:39 -- scripts/common.sh@352 -- # local d=1 00:07:04.341 03:50:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.342 03:50:39 -- scripts/common.sh@354 -- # echo 1 00:07:04.342 03:50:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.342 03:50:39 -- scripts/common.sh@365 -- # decimal 2 00:07:04.342 03:50:39 -- scripts/common.sh@352 -- # local d=2 00:07:04.342 03:50:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.342 03:50:39 -- scripts/common.sh@354 -- # echo 2 00:07:04.342 03:50:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.342 03:50:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.342 03:50:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.342 03:50:39 -- scripts/common.sh@367 -- # return 0 00:07:04.342 03:50:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.342 03:50:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.342 --rc genhtml_branch_coverage=1 00:07:04.342 --rc genhtml_function_coverage=1 00:07:04.342 --rc genhtml_legend=1 00:07:04.342 --rc geninfo_all_blocks=1 00:07:04.342 --rc geninfo_unexecuted_blocks=1 00:07:04.342 00:07:04.342 ' 00:07:04.342 03:50:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.342 --rc genhtml_branch_coverage=1 00:07:04.342 --rc genhtml_function_coverage=1 00:07:04.342 --rc genhtml_legend=1 00:07:04.342 --rc geninfo_all_blocks=1 00:07:04.342 --rc geninfo_unexecuted_blocks=1 00:07:04.342 00:07:04.342 ' 00:07:04.342 03:50:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.342 --rc genhtml_branch_coverage=1 00:07:04.342 --rc genhtml_function_coverage=1 00:07:04.342 --rc genhtml_legend=1 00:07:04.342 --rc geninfo_all_blocks=1 00:07:04.342 --rc geninfo_unexecuted_blocks=1 00:07:04.342 00:07:04.342 ' 00:07:04.342 03:50:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.342 --rc genhtml_branch_coverage=1 00:07:04.342 --rc genhtml_function_coverage=1 00:07:04.342 --rc genhtml_legend=1 00:07:04.342 --rc geninfo_all_blocks=1 00:07:04.342 --rc geninfo_unexecuted_blocks=1 00:07:04.342 00:07:04.342 ' 00:07:04.342 03:50:39 -- app/version.sh@17 -- # get_header_version major 00:07:04.342 03:50:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.342 03:50:39 -- app/version.sh@14 -- # cut -f2 00:07:04.342 03:50:39 -- app/version.sh@14 -- # tr -d '"' 00:07:04.342 03:50:39 -- app/version.sh@17 -- # major=24 00:07:04.342 03:50:39 -- app/version.sh@18 -- # get_header_version minor 00:07:04.342 03:50:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.342 03:50:39 -- app/version.sh@14 -- # cut -f2 00:07:04.342 03:50:39 -- app/version.sh@14 -- # tr -d '"' 00:07:04.342 03:50:39 -- app/version.sh@18 -- # minor=1 00:07:04.342 03:50:39 -- app/version.sh@19 -- # get_header_version patch 00:07:04.342 03:50:39 -- app/version.sh@14 -- # cut -f2 00:07:04.342 03:50:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.342 03:50:39 -- app/version.sh@14 -- # tr -d '"' 00:07:04.342 03:50:39 -- app/version.sh@19 -- # patch=1 00:07:04.342 03:50:39 -- app/version.sh@20 -- # get_header_version suffix 00:07:04.342 03:50:39 -- app/version.sh@14 -- # cut -f2 00:07:04.342 03:50:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.342 03:50:39 -- app/version.sh@14 -- # tr -d '"' 00:07:04.342 03:50:39 -- app/version.sh@20 -- # suffix=-pre 00:07:04.342 03:50:39 -- app/version.sh@22 -- # version=24.1 00:07:04.342 03:50:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:04.342 03:50:39 -- app/version.sh@25 -- # version=24.1.1 00:07:04.342 03:50:39 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:04.342 03:50:39 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:04.342 03:50:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:04.342 03:50:39 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:04.342 03:50:39 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:04.342 00:07:04.342 real 0m0.247s 00:07:04.342 user 0m0.166s 00:07:04.342 sys 0m0.119s 00:07:04.342 03:50:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.342 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.342 ************************************ 00:07:04.342 END TEST version 00:07:04.342 ************************************ 00:07:04.601 03:50:39 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@191 -- # uname -s 00:07:04.601 03:50:39 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:04.601 03:50:39 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:04.601 03:50:39 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:04.601 03:50:39 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:04.601 03:50:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.601 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.601 03:50:39 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:04.601 03:50:39 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:04.601 03:50:39 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:04.601 03:50:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.601 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.601 ************************************ 00:07:04.601 START TEST nvmf_tcp 00:07:04.601 ************************************ 00:07:04.601 03:50:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:04.601 * Looking for test storage... 00:07:04.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:04.601 03:50:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.601 03:50:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.601 03:50:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.601 03:50:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.601 03:50:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.601 03:50:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.601 03:50:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.601 03:50:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.601 03:50:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.601 03:50:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.601 03:50:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.601 03:50:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.601 03:50:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.601 03:50:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.601 03:50:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.601 03:50:39 -- scripts/common.sh@344 -- # : 1 00:07:04.601 03:50:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.601 03:50:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.601 03:50:39 -- scripts/common.sh@364 -- # decimal 1 00:07:04.601 03:50:39 -- scripts/common.sh@352 -- # local d=1 00:07:04.601 03:50:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.601 03:50:39 -- scripts/common.sh@354 -- # echo 1 00:07:04.601 03:50:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.601 03:50:39 -- scripts/common.sh@365 -- # decimal 2 00:07:04.601 03:50:39 -- scripts/common.sh@352 -- # local d=2 00:07:04.601 03:50:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.601 03:50:39 -- scripts/common.sh@354 -- # echo 2 00:07:04.601 03:50:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.601 03:50:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.601 03:50:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.601 03:50:39 -- scripts/common.sh@367 -- # return 0 00:07:04.601 03:50:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.601 --rc genhtml_branch_coverage=1 00:07:04.601 --rc genhtml_function_coverage=1 00:07:04.601 --rc genhtml_legend=1 00:07:04.601 --rc geninfo_all_blocks=1 00:07:04.601 --rc geninfo_unexecuted_blocks=1 00:07:04.601 00:07:04.601 ' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.601 --rc genhtml_branch_coverage=1 00:07:04.601 --rc genhtml_function_coverage=1 00:07:04.601 --rc genhtml_legend=1 00:07:04.601 --rc geninfo_all_blocks=1 00:07:04.601 --rc geninfo_unexecuted_blocks=1 00:07:04.601 00:07:04.601 ' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.601 --rc genhtml_branch_coverage=1 00:07:04.601 --rc genhtml_function_coverage=1 00:07:04.601 --rc genhtml_legend=1 00:07:04.601 --rc geninfo_all_blocks=1 00:07:04.601 --rc geninfo_unexecuted_blocks=1 00:07:04.601 00:07:04.601 ' 00:07:04.601 03:50:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.601 --rc genhtml_branch_coverage=1 00:07:04.601 --rc genhtml_function_coverage=1 00:07:04.601 --rc genhtml_legend=1 00:07:04.601 --rc geninfo_all_blocks=1 00:07:04.601 --rc geninfo_unexecuted_blocks=1 00:07:04.601 00:07:04.601 ' 00:07:04.601 03:50:39 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:04.601 03:50:39 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:04.601 03:50:39 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.601 03:50:39 -- nvmf/common.sh@7 -- # uname -s 00:07:04.601 03:50:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.601 03:50:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.601 03:50:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.601 03:50:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.601 03:50:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.601 03:50:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.601 03:50:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.601 03:50:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.601 03:50:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.601 03:50:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.601 03:50:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:04.601 03:50:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:04.601 03:50:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.601 03:50:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.601 03:50:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.601 03:50:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.601 03:50:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.601 03:50:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.601 03:50:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.601 03:50:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.601 03:50:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.601 03:50:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.601 03:50:39 -- paths/export.sh@5 -- # export PATH 00:07:04.601 03:50:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.601 03:50:39 -- nvmf/common.sh@46 -- # : 0 00:07:04.601 03:50:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.601 03:50:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.601 03:50:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.601 03:50:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.601 03:50:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.602 03:50:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.602 03:50:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.602 03:50:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.860 03:50:39 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:04.860 03:50:39 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:04.860 03:50:39 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:04.860 03:50:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.860 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 03:50:39 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:04.860 03:50:39 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:04.860 03:50:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.860 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 ************************************ 00:07:04.860 START TEST nvmf_example 00:07:04.860 ************************************ 00:07:04.860 03:50:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:04.860 * Looking for test storage... 00:07:04.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.860 03:50:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.860 03:50:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.860 03:50:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.860 03:50:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.860 03:50:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.860 03:50:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.860 03:50:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.860 03:50:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.860 03:50:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.860 03:50:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.860 03:50:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.860 03:50:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.860 03:50:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.860 03:50:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.860 03:50:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.860 03:50:39 -- scripts/common.sh@344 -- # : 1 00:07:04.860 03:50:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.860 03:50:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.860 03:50:39 -- scripts/common.sh@364 -- # decimal 1 00:07:04.860 03:50:39 -- scripts/common.sh@352 -- # local d=1 00:07:04.860 03:50:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.860 03:50:39 -- scripts/common.sh@354 -- # echo 1 00:07:04.860 03:50:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.860 03:50:39 -- scripts/common.sh@365 -- # decimal 2 00:07:04.860 03:50:39 -- scripts/common.sh@352 -- # local d=2 00:07:04.860 03:50:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.860 03:50:39 -- scripts/common.sh@354 -- # echo 2 00:07:04.860 03:50:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.860 03:50:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.860 03:50:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.860 03:50:39 -- scripts/common.sh@367 -- # return 0 00:07:04.860 03:50:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.860 --rc genhtml_branch_coverage=1 00:07:04.860 --rc genhtml_function_coverage=1 00:07:04.860 --rc genhtml_legend=1 00:07:04.860 --rc geninfo_all_blocks=1 00:07:04.860 --rc geninfo_unexecuted_blocks=1 00:07:04.860 00:07:04.860 ' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.860 --rc genhtml_branch_coverage=1 00:07:04.860 --rc genhtml_function_coverage=1 00:07:04.860 --rc genhtml_legend=1 00:07:04.860 --rc geninfo_all_blocks=1 00:07:04.860 --rc geninfo_unexecuted_blocks=1 00:07:04.860 00:07:04.860 ' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.860 --rc genhtml_branch_coverage=1 00:07:04.860 --rc genhtml_function_coverage=1 00:07:04.860 --rc genhtml_legend=1 00:07:04.860 --rc geninfo_all_blocks=1 00:07:04.860 --rc geninfo_unexecuted_blocks=1 00:07:04.860 00:07:04.860 ' 00:07:04.860 03:50:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.860 --rc genhtml_branch_coverage=1 00:07:04.860 --rc genhtml_function_coverage=1 00:07:04.860 --rc genhtml_legend=1 00:07:04.860 --rc geninfo_all_blocks=1 00:07:04.860 --rc geninfo_unexecuted_blocks=1 00:07:04.860 00:07:04.860 ' 00:07:04.860 03:50:39 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.860 03:50:39 -- nvmf/common.sh@7 -- # uname -s 00:07:04.860 03:50:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.860 03:50:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.860 03:50:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.860 03:50:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.860 03:50:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.860 03:50:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.860 03:50:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.860 03:50:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.860 03:50:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.860 03:50:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.860 03:50:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
00:07:04.860 03:50:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:04.860 03:50:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.860 03:50:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.860 03:50:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.860 03:50:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.860 03:50:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.860 03:50:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.860 03:50:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.860 03:50:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.860 03:50:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.860 03:50:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.860 03:50:39 -- paths/export.sh@5 -- # export PATH 00:07:04.860 03:50:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.860 03:50:39 -- nvmf/common.sh@46 -- # : 0 00:07:04.860 03:50:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.860 03:50:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.860 03:50:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.860 03:50:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.860 03:50:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.860 03:50:39 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:04.860 03:50:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.860 03:50:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.860 03:50:39 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:04.860 03:50:39 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:04.860 03:50:39 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:04.860 03:50:39 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:04.860 03:50:39 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:04.860 03:50:39 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:04.860 03:50:39 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:04.860 03:50:39 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:04.861 03:50:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.861 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.861 03:50:39 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:04.861 03:50:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:04.861 03:50:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.861 03:50:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:04.861 03:50:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:04.861 03:50:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:04.861 03:50:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.861 03:50:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.861 03:50:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.861 03:50:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:04.861 03:50:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:04.861 03:50:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:04.861 03:50:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:04.861 03:50:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:04.861 03:50:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:04.861 03:50:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.861 03:50:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.861 03:50:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:04.861 03:50:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:04.861 03:50:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.861 03:50:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.861 03:50:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.861 03:50:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.861 03:50:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.861 03:50:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:04.861 03:50:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.861 03:50:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.861 03:50:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:04.861 Cannot find device "nvmf_init_br" 00:07:04.861 03:50:39 -- nvmf/common.sh@153 -- # true 00:07:04.861 03:50:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:04.861 Cannot find device "nvmf_tgt_br" 00:07:04.861 03:50:39 -- nvmf/common.sh@154 -- # true 00:07:04.861 03:50:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.861 Cannot find device "nvmf_tgt_br2" 
00:07:04.861 03:50:39 -- nvmf/common.sh@155 -- # true 00:07:04.861 03:50:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:04.861 Cannot find device "nvmf_init_br" 00:07:05.119 03:50:39 -- nvmf/common.sh@156 -- # true 00:07:05.119 03:50:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:05.119 Cannot find device "nvmf_tgt_br" 00:07:05.119 03:50:39 -- nvmf/common.sh@157 -- # true 00:07:05.119 03:50:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:05.119 Cannot find device "nvmf_tgt_br2" 00:07:05.119 03:50:39 -- nvmf/common.sh@158 -- # true 00:07:05.119 03:50:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:05.119 Cannot find device "nvmf_br" 00:07:05.119 03:50:40 -- nvmf/common.sh@159 -- # true 00:07:05.119 03:50:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:05.119 Cannot find device "nvmf_init_if" 00:07:05.119 03:50:40 -- nvmf/common.sh@160 -- # true 00:07:05.119 03:50:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:05.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.119 03:50:40 -- nvmf/common.sh@161 -- # true 00:07:05.119 03:50:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:05.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.119 03:50:40 -- nvmf/common.sh@162 -- # true 00:07:05.119 03:50:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:05.119 03:50:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:05.119 03:50:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:05.119 03:50:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:05.119 03:50:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:05.119 03:50:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:05.119 03:50:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:05.119 03:50:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:05.119 03:50:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:05.119 03:50:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:05.119 03:50:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:05.119 03:50:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:05.119 03:50:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:05.119 03:50:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:05.119 03:50:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:05.119 03:50:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:05.119 03:50:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:05.119 03:50:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:05.119 03:50:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.377 03:50:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.377 03:50:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.377 03:50:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.377 03:50:40 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.377 03:50:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:05.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:07:05.377 00:07:05.377 --- 10.0.0.2 ping statistics --- 00:07:05.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.377 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:05.377 03:50:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:05.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:05.377 00:07:05.377 --- 10.0.0.3 ping statistics --- 00:07:05.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.377 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:05.377 03:50:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:05.377 00:07:05.377 --- 10.0.0.1 ping statistics --- 00:07:05.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.377 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:05.377 03:50:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.377 03:50:40 -- nvmf/common.sh@421 -- # return 0 00:07:05.377 03:50:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:05.377 03:50:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.377 03:50:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:05.377 03:50:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:05.377 03:50:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.377 03:50:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:05.377 03:50:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:05.377 03:50:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:05.377 03:50:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:05.377 03:50:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.377 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:05.377 03:50:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:05.377 03:50:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:05.377 03:50:40 -- target/nvmf_example.sh@34 -- # nvmfpid=60164 00:07:05.377 03:50:40 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:05.377 03:50:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.377 03:50:40 -- target/nvmf_example.sh@36 -- # waitforlisten 60164 00:07:05.377 03:50:40 -- common/autotest_common.sh@829 -- # '[' -z 60164 ']' 00:07:05.377 03:50:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.377 03:50:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.377 03:50:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
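The nvmf_veth_init sequence above builds the virtual network the rest of the run talks over: the target sits in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (plus 10.0.0.3 on a second interface), the initiator stays in the root namespace on 10.0.0.1, veth pairs enslaved to the nvmf_br bridge join the two, and the three pings prove connectivity in both directions. The topology condensed from the traced commands (the error-probing cleanup steps and the second target pair are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bridge the root-namespace ends together and bring everything up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the listener port and let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, exactly as the log verifies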
00:07:05.377 03:50:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.377 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:06.312 03:50:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.312 03:50:41 -- common/autotest_common.sh@862 -- # return 0 00:07:06.312 03:50:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:06.312 03:50:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.312 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.570 03:50:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 03:50:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:06.570 03:50:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 03:50:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:06.570 03:50:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.570 03:50:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 03:50:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:06.570 03:50:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:06.570 03:50:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 03:50:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.570 03:50:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 03:50:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 03:50:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 03:50:41 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:06.570 03:50:41 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:18.805 Initializing NVMe Controllers 00:07:18.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:18.805 Initialization complete. Launching workers. 
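The spdk_nvme_perf run that just launched drives the example target for 10 seconds at queue depth 64 with 4096-byte random I/O, 30% reads (-M sets the read share of the mix), addressed through the -r transport string. Annotated form of the traced command:

    # -q 64: queue depth per connection; -o 4096: I/O size in bytes
    # -w randrw with -M 30: random mix, 30% reads / 70% writes; -t 10: run time in seconds
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbers in the table below are self-consistent: 15969.51 IOPS x 4096 bytes is ~62.38 MiB/s, matching the MiB/s column, and by Little's law 64 outstanding I/Os / 15969.51 IOPS gives ~4008 us, matching the average latency column.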
00:07:18.805 ========================================================
00:07:18.805                                                                            Latency(us)
00:07:18.805 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:18.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15969.51      62.38    4007.57     706.01   23562.89
00:07:18.805 ========================================================
00:07:18.805 Total                                                                    :   15969.51      62.38    4007.57     706.01   23562.89
00:07:18.805
00:07:18.805 03:50:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:18.805 03:50:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:18.805 03:50:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:18.805 03:50:51 -- nvmf/common.sh@116 -- # sync 00:07:18.805 03:50:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:18.805 03:50:51 -- nvmf/common.sh@119 -- # set +e 00:07:18.805 03:50:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:18.805 03:50:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:18.805 rmmod nvme_tcp 00:07:18.805 rmmod nvme_fabrics 00:07:18.805 rmmod nvme_keyring 00:07:18.805 03:50:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:18.805 03:50:51 -- nvmf/common.sh@123 -- # set -e 00:07:18.805 03:50:51 -- nvmf/common.sh@124 -- # return 0 00:07:18.805 03:50:51 -- nvmf/common.sh@477 -- # '[' -n 60164 ']' 00:07:18.805 03:50:51 -- nvmf/common.sh@478 -- # killprocess 60164 00:07:18.805 03:50:51 -- common/autotest_common.sh@936 -- # '[' -z 60164 ']' 00:07:18.805 03:50:51 -- common/autotest_common.sh@940 -- # kill -0 60164 00:07:18.805 03:50:51 -- common/autotest_common.sh@941 -- # uname 00:07:18.805 03:50:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.805 03:50:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60164 00:07:18.805 killing process with pid 60164 00:07:18.805 03:50:51 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:18.805 03:50:51 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:18.805 03:50:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60164' 00:07:18.805 03:50:51 -- common/autotest_common.sh@955 -- # kill 60164 00:07:18.805 03:50:51 -- common/autotest_common.sh@960 -- # wait 60164 00:07:18.805 nvmf threads initialize successfully 00:07:18.806 bdev subsystem init successfully 00:07:18.806 created a nvmf target service 00:07:18.806 create targets's poll groups done 00:07:18.806 all subsystems of target started 00:07:18.806 nvmf target is running 00:07:18.806 all subsystems of target stopped 00:07:18.806 destroy targets's poll groups done 00:07:18.806 destroyed the nvmf target service 00:07:18.806 bdev subsystem finish successfully 00:07:18.806 nvmf threads destroy successfully 00:07:18.806 03:50:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:18.806 03:50:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:18.806 03:50:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:18.806 03:50:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:18.806 03:50:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:18.806 03:50:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.806 03:50:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.806 03:50:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.806 03:50:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:18.806 03:50:52 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:18.806 03:50:52 -- common/autotest_common.sh@728 -- #
xtrace_disable 00:07:18.806 03:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 00:07:18.806 real 0m12.496s 00:07:18.806 user 0m44.716s 00:07:18.806 sys 0m1.975s 00:07:18.806 03:50:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.806 03:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 ************************************ 00:07:18.806 END TEST nvmf_example 00:07:18.806 ************************************ 00:07:18.806 03:50:52 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:18.806 03:50:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.806 03:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 ************************************ 00:07:18.806 START TEST nvmf_filesystem 00:07:18.806 ************************************ 00:07:18.806 03:50:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:18.806 * Looking for test storage... 00:07:18.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.806 03:50:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.806 03:50:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.806 03:50:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.806 03:50:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.806 03:50:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.806 03:50:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.806 03:50:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.806 03:50:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.806 03:50:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.806 03:50:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.806 03:50:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.806 03:50:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.806 03:50:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.806 03:50:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.806 03:50:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.806 03:50:52 -- scripts/common.sh@344 -- # : 1 00:07:18.806 03:50:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.806 03:50:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.806 03:50:52 -- scripts/common.sh@364 -- # decimal 1 00:07:18.806 03:50:52 -- scripts/common.sh@352 -- # local d=1 00:07:18.806 03:50:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.806 03:50:52 -- scripts/common.sh@354 -- # echo 1 00:07:18.806 03:50:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.806 03:50:52 -- scripts/common.sh@365 -- # decimal 2 00:07:18.806 03:50:52 -- scripts/common.sh@352 -- # local d=2 00:07:18.806 03:50:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.806 03:50:52 -- scripts/common.sh@354 -- # echo 2 00:07:18.806 03:50:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.806 03:50:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.806 03:50:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.806 03:50:52 -- scripts/common.sh@367 -- # return 0 00:07:18.806 03:50:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.806 --rc genhtml_branch_coverage=1 00:07:18.806 --rc genhtml_function_coverage=1 00:07:18.806 --rc genhtml_legend=1 00:07:18.806 --rc geninfo_all_blocks=1 00:07:18.806 --rc geninfo_unexecuted_blocks=1 00:07:18.806 00:07:18.806 ' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.806 --rc genhtml_branch_coverage=1 00:07:18.806 --rc genhtml_function_coverage=1 00:07:18.806 --rc genhtml_legend=1 00:07:18.806 --rc geninfo_all_blocks=1 00:07:18.806 --rc geninfo_unexecuted_blocks=1 00:07:18.806 00:07:18.806 ' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.806 --rc genhtml_branch_coverage=1 00:07:18.806 --rc genhtml_function_coverage=1 00:07:18.806 --rc genhtml_legend=1 00:07:18.806 --rc geninfo_all_blocks=1 00:07:18.806 --rc geninfo_unexecuted_blocks=1 00:07:18.806 00:07:18.806 ' 00:07:18.806 03:50:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.806 --rc genhtml_branch_coverage=1 00:07:18.806 --rc genhtml_function_coverage=1 00:07:18.806 --rc genhtml_legend=1 00:07:18.806 --rc geninfo_all_blocks=1 00:07:18.806 --rc geninfo_unexecuted_blocks=1 00:07:18.806 00:07:18.806 ' 00:07:18.806 03:50:52 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:18.806 03:50:52 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:18.806 03:50:52 -- common/autotest_common.sh@34 -- # set -e 00:07:18.806 03:50:52 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:18.806 03:50:52 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:18.806 03:50:52 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:18.806 03:50:52 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:18.806 03:50:52 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:18.806 03:50:52 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:18.806 03:50:52 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:18.806 03:50:52 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:18.806 03:50:52 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:18.806 03:50:52 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:18.806 03:50:52 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:18.806 03:50:52 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:18.806 03:50:52 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:18.806 03:50:52 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:18.806 03:50:52 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:18.806 03:50:52 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:18.806 03:50:52 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:18.806 03:50:52 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:18.806 03:50:52 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:18.806 03:50:52 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:18.806 03:50:52 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:18.806 03:50:52 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:18.806 03:50:52 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:18.806 03:50:52 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:18.806 03:50:52 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:18.806 03:50:52 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:18.806 03:50:52 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:18.806 03:50:52 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:18.806 03:50:52 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:18.806 03:50:52 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:18.806 03:50:52 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:18.806 03:50:52 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:18.806 03:50:52 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:18.806 03:50:52 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:18.806 03:50:52 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:18.806 03:50:52 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:18.806 03:50:52 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:18.806 03:50:52 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:18.806 03:50:52 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:18.806 03:50:52 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:18.806 03:50:52 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:18.806 03:50:52 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:18.806 03:50:52 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:18.806 03:50:52 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:18.806 03:50:52 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:18.806 03:50:52 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:18.806 03:50:52 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:18.806 03:50:52 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:18.806 03:50:52 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:18.806 03:50:52 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:18.806 03:50:52 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:18.806 03:50:52 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:18.806 03:50:52 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:18.806 03:50:52 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:18.806 03:50:52 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:18.806 
03:50:52 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:18.806 03:50:52 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:18.806 03:50:52 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:18.806 03:50:52 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:18.806 03:50:52 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:18.806 03:50:52 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:18.807 03:50:52 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:18.807 03:50:52 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:18.807 03:50:52 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:18.807 03:50:52 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:18.807 03:50:52 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:18.807 03:50:52 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:18.807 03:50:52 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:18.807 03:50:52 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:18.807 03:50:52 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:18.807 03:50:52 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:18.807 03:50:52 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:18.807 03:50:52 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:18.807 03:50:52 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:18.807 03:50:52 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:18.807 03:50:52 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:18.807 03:50:52 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:18.807 03:50:52 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:18.807 03:50:52 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:18.807 03:50:52 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:18.807 03:50:52 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:18.807 03:50:52 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:18.807 03:50:52 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:18.807 03:50:52 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:18.807 03:50:52 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:18.807 03:50:52 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:18.807 03:50:52 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:18.807 03:50:52 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:18.807 03:50:52 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:18.807 03:50:52 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:18.807 03:50:52 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:18.807 03:50:52 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:18.807 03:50:52 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:18.807 03:50:52 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:18.807 03:50:52 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:18.807 03:50:52 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:18.807 03:50:52 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:18.807 03:50:52 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:18.807 03:50:52 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:18.807 #define SPDK_CONFIG_H 00:07:18.807 #define SPDK_CONFIG_APPS 1 00:07:18.807 #define SPDK_CONFIG_ARCH native 00:07:18.807 #undef SPDK_CONFIG_ASAN 00:07:18.807 #define SPDK_CONFIG_AVAHI 1 00:07:18.807 #undef SPDK_CONFIG_CET 00:07:18.807 #define SPDK_CONFIG_COVERAGE 1 00:07:18.807 #define SPDK_CONFIG_CROSS_PREFIX 00:07:18.807 #undef SPDK_CONFIG_CRYPTO 00:07:18.807 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:18.807 #undef SPDK_CONFIG_CUSTOMOCF 00:07:18.807 #undef SPDK_CONFIG_DAOS 00:07:18.807 #define SPDK_CONFIG_DAOS_DIR 00:07:18.807 #define SPDK_CONFIG_DEBUG 1 00:07:18.807 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:18.807 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:18.807 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:18.807 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:18.807 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:18.807 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:18.807 #define SPDK_CONFIG_EXAMPLES 1 00:07:18.807 #undef SPDK_CONFIG_FC 00:07:18.807 #define SPDK_CONFIG_FC_PATH 00:07:18.807 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:18.807 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:18.807 #undef SPDK_CONFIG_FUSE 00:07:18.807 #undef SPDK_CONFIG_FUZZER 00:07:18.807 #define SPDK_CONFIG_FUZZER_LIB 00:07:18.807 #define SPDK_CONFIG_GOLANG 1 00:07:18.807 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:18.807 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:18.807 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:18.807 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:18.807 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:18.807 #define SPDK_CONFIG_IDXD 1 00:07:18.807 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:18.807 #undef SPDK_CONFIG_IPSEC_MB 00:07:18.807 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:18.807 #define SPDK_CONFIG_ISAL 1 00:07:18.807 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:18.807 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:18.807 #define SPDK_CONFIG_LIBDIR 00:07:18.807 #undef SPDK_CONFIG_LTO 00:07:18.807 #define SPDK_CONFIG_MAX_LCORES 00:07:18.807 #define SPDK_CONFIG_NVME_CUSE 1 00:07:18.807 #undef SPDK_CONFIG_OCF 00:07:18.807 #define SPDK_CONFIG_OCF_PATH 00:07:18.807 #define SPDK_CONFIG_OPENSSL_PATH 00:07:18.807 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:18.807 #undef SPDK_CONFIG_PGO_USE 00:07:18.807 #define SPDK_CONFIG_PREFIX /usr/local 00:07:18.807 #undef SPDK_CONFIG_RAID5F 00:07:18.807 #undef SPDK_CONFIG_RBD 00:07:18.807 #define SPDK_CONFIG_RDMA 1 00:07:18.807 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:18.807 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:18.807 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:18.807 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:18.807 #define SPDK_CONFIG_SHARED 1 00:07:18.807 #undef SPDK_CONFIG_SMA 00:07:18.807 #define SPDK_CONFIG_TESTS 1 00:07:18.807 #undef SPDK_CONFIG_TSAN 00:07:18.807 #define SPDK_CONFIG_UBLK 1 00:07:18.807 #define SPDK_CONFIG_UBSAN 1 00:07:18.807 #undef SPDK_CONFIG_UNIT_TESTS 00:07:18.807 #undef SPDK_CONFIG_URING 00:07:18.807 #define SPDK_CONFIG_URING_PATH 00:07:18.807 #undef SPDK_CONFIG_URING_ZNS 00:07:18.807 #define SPDK_CONFIG_USDT 1 00:07:18.807 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:18.807 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:18.807 #define SPDK_CONFIG_VFIO_USER 1 00:07:18.807 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:18.807 #define SPDK_CONFIG_VHOST 1 00:07:18.807 #define SPDK_CONFIG_VIRTIO 1 00:07:18.807 #undef SPDK_CONFIG_VTUNE 00:07:18.807 #define SPDK_CONFIG_VTUNE_DIR 
00:07:18.807 #define SPDK_CONFIG_WERROR 1 00:07:18.807 #define SPDK_CONFIG_WPDK_DIR 00:07:18.807 #undef SPDK_CONFIG_XNVME 00:07:18.807 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:18.807 03:50:52 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:18.807 03:50:52 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.807 03:50:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.807 03:50:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.807 03:50:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.807 03:50:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.807 03:50:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.807 03:50:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.807 03:50:52 -- paths/export.sh@5 -- # export PATH 00:07:18.807 03:50:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.807 03:50:52 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:18.807 03:50:52 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:18.807 03:50:52 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:18.807 03:50:52 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:18.807 03:50:52 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:18.807 03:50:52 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:18.807 03:50:52 -- pm/common@16 -- # TEST_TAG=N/A 00:07:18.807 03:50:52 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:18.807 03:50:52 -- common/autotest_common.sh@52 -- # : 1 00:07:18.807 03:50:52 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:18.807 03:50:52 -- common/autotest_common.sh@56 -- # : 0 00:07:18.807 03:50:52 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:18.808 03:50:52 -- common/autotest_common.sh@58 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:18.808 03:50:52 -- common/autotest_common.sh@60 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:18.808 03:50:52 -- common/autotest_common.sh@62 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:18.808 03:50:52 -- common/autotest_common.sh@64 -- # : 00:07:18.808 03:50:52 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:18.808 03:50:52 -- common/autotest_common.sh@66 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:18.808 03:50:52 -- common/autotest_common.sh@68 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:18.808 03:50:52 -- common/autotest_common.sh@70 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:18.808 03:50:52 -- common/autotest_common.sh@72 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:18.808 03:50:52 -- common/autotest_common.sh@74 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:18.808 03:50:52 -- common/autotest_common.sh@76 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:18.808 03:50:52 -- common/autotest_common.sh@78 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:18.808 03:50:52 -- common/autotest_common.sh@80 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:18.808 03:50:52 -- common/autotest_common.sh@82 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:18.808 03:50:52 -- common/autotest_common.sh@84 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:18.808 03:50:52 -- common/autotest_common.sh@86 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:18.808 03:50:52 -- common/autotest_common.sh@88 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:18.808 03:50:52 -- common/autotest_common.sh@90 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:18.808 03:50:52 -- common/autotest_common.sh@92 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:18.808 03:50:52 -- common/autotest_common.sh@94 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:18.808 03:50:52 -- common/autotest_common.sh@96 -- # : tcp 00:07:18.808 03:50:52 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:07:18.808 03:50:52 -- common/autotest_common.sh@98 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:18.808 03:50:52 -- common/autotest_common.sh@100 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:18.808 03:50:52 -- common/autotest_common.sh@102 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:18.808 03:50:52 -- common/autotest_common.sh@104 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:18.808 03:50:52 -- common/autotest_common.sh@106 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:18.808 03:50:52 -- common/autotest_common.sh@108 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:18.808 03:50:52 -- common/autotest_common.sh@110 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:18.808 03:50:52 -- common/autotest_common.sh@112 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:18.808 03:50:52 -- common/autotest_common.sh@114 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:18.808 03:50:52 -- common/autotest_common.sh@116 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:18.808 03:50:52 -- common/autotest_common.sh@118 -- # : 00:07:18.808 03:50:52 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:18.808 03:50:52 -- common/autotest_common.sh@120 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:18.808 03:50:52 -- common/autotest_common.sh@122 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:18.808 03:50:52 -- common/autotest_common.sh@124 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:18.808 03:50:52 -- common/autotest_common.sh@126 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:18.808 03:50:52 -- common/autotest_common.sh@128 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:18.808 03:50:52 -- common/autotest_common.sh@130 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:18.808 03:50:52 -- common/autotest_common.sh@132 -- # : 00:07:18.808 03:50:52 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:18.808 03:50:52 -- common/autotest_common.sh@134 -- # : true 00:07:18.808 03:50:52 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:18.808 03:50:52 -- common/autotest_common.sh@136 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:18.808 03:50:52 -- common/autotest_common.sh@138 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:18.808 03:50:52 -- common/autotest_common.sh@140 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:18.808 03:50:52 -- common/autotest_common.sh@142 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:18.808 03:50:52 -- common/autotest_common.sh@144 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:07:18.808 03:50:52 -- common/autotest_common.sh@146 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:18.808 03:50:52 -- common/autotest_common.sh@148 -- # : 00:07:18.808 03:50:52 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:18.808 03:50:52 -- common/autotest_common.sh@150 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:18.808 03:50:52 -- common/autotest_common.sh@152 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:18.808 03:50:52 -- common/autotest_common.sh@154 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:18.808 03:50:52 -- common/autotest_common.sh@156 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:18.808 03:50:52 -- common/autotest_common.sh@158 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:18.808 03:50:52 -- common/autotest_common.sh@160 -- # : 0 00:07:18.808 03:50:52 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:18.808 03:50:52 -- common/autotest_common.sh@163 -- # : 00:07:18.808 03:50:52 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:18.808 03:50:52 -- common/autotest_common.sh@165 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:18.808 03:50:52 -- common/autotest_common.sh@167 -- # : 1 00:07:18.808 03:50:52 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:18.808 03:50:52 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:18.808 03:50:52 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.808 03:50:52 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.808 03:50:52 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:18.808 03:50:52 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:18.808 03:50:52 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:18.808 03:50:52 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:18.808 03:50:52 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.808 03:50:52 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.808 03:50:52 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.808 03:50:52 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.809 03:50:52 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:18.809 03:50:52 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:18.809 03:50:52 -- common/autotest_common.sh@196 -- # cat 00:07:18.809 03:50:52 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:18.809 03:50:52 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.809 03:50:52 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.809 03:50:52 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.809 03:50:52 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.809 03:50:52 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:18.809 03:50:52 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:18.809 03:50:52 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:18.809 03:50:52 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:18.809 03:50:52 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:18.809 03:50:52 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:18.809 03:50:52 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.809 03:50:52 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.809 03:50:52 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.809 03:50:52 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.809 03:50:52 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:18.809 03:50:52 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:18.809 03:50:52 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.809 03:50:52 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.809 03:50:52 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:18.809 03:50:52 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:18.809 03:50:52 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:18.809 03:50:52 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:18.809 03:50:52 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:18.809 03:50:52 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:18.809 03:50:52 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:18.809 03:50:52 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:18.809 03:50:52 -- common/autotest_common.sh@259 -- # valgrind= 00:07:18.809 03:50:52 -- common/autotest_common.sh@265 -- # uname -s 00:07:18.809 03:50:52 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:18.809 03:50:52 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:18.809 03:50:52 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:18.809 03:50:52 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:18.809 03:50:52 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:18.809 03:50:52 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:18.809 03:50:52 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:18.809 03:50:52 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:18.809 03:50:52 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:18.809 03:50:52 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:18.809 03:50:52 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:18.809 03:50:52 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:18.809 03:50:52 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:18.809 03:50:52 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:18.809 03:50:52 -- common/autotest_common.sh@319 -- # [[ -z 60418 ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@319 -- # kill -0 60418 00:07:18.809 03:50:52 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:18.809 03:50:52 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:18.809 03:50:52 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:18.809 03:50:52 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:18.809 03:50:52 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:18.809 03:50:52 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:18.809 03:50:52 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:18.809 03:50:52 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.92bKt8 00:07:18.809 03:50:52 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:18.809 03:50:52 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:18.809 03:50:52 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.92bKt8/tests/target /tmp/spdk.92bKt8 00:07:18.809 03:50:52 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@328 -- # df -T 00:07:18.809 03:50:52 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=14017253376 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=5550567424 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=14017253376 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=5550567424 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:18.809 03:50:52 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # avails["$mount"]=98022088704 00:07:18.809 03:50:52 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:18.809 03:50:52 -- common/autotest_common.sh@364 -- # uses["$mount"]=1680691200 00:07:18.809 03:50:52 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:18.810 03:50:52 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:18.810 * Looking for test storage... 
00:07:18.810 03:50:52 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:18.810 03:50:52 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:18.810 03:50:52 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:18.810 03:50:52 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.810 03:50:52 -- common/autotest_common.sh@373 -- # mount=/home 00:07:18.810 03:50:52 -- common/autotest_common.sh@375 -- # target_space=14017253376 00:07:18.810 03:50:52 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:18.810 03:50:52 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:18.810 03:50:52 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.810 03:50:52 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.810 03:50:52 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.810 03:50:52 -- common/autotest_common.sh@390 -- # return 0 00:07:18.810 03:50:52 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:18.810 03:50:52 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:18.810 03:50:52 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:18.810 03:50:52 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1682 -- # true 00:07:18.810 03:50:52 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:18.810 03:50:52 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@27 -- # exec 00:07:18.810 03:50:52 -- common/autotest_common.sh@29 -- # exec 00:07:18.810 03:50:52 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:18.810 03:50:52 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:18.810 03:50:52 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:18.810 03:50:52 -- common/autotest_common.sh@18 -- # set -x 00:07:18.810 03:50:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.810 03:50:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.810 03:50:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.810 03:50:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.810 03:50:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.810 03:50:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.810 03:50:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.810 03:50:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.810 03:50:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.810 03:50:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.810 03:50:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.810 03:50:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.810 03:50:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.810 03:50:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.810 03:50:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.810 03:50:52 -- scripts/common.sh@344 -- # : 1 00:07:18.810 03:50:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.810 03:50:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.810 03:50:52 -- scripts/common.sh@364 -- # decimal 1 00:07:18.810 03:50:52 -- scripts/common.sh@352 -- # local d=1 00:07:18.810 03:50:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.810 03:50:52 -- scripts/common.sh@354 -- # echo 1 00:07:18.810 03:50:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.810 03:50:52 -- scripts/common.sh@365 -- # decimal 2 00:07:18.810 03:50:52 -- scripts/common.sh@352 -- # local d=2 00:07:18.810 03:50:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.810 03:50:52 -- scripts/common.sh@354 -- # echo 2 00:07:18.810 03:50:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.810 03:50:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.810 03:50:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.810 03:50:52 -- scripts/common.sh@367 -- # return 0 00:07:18.810 03:50:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.810 --rc genhtml_branch_coverage=1 00:07:18.810 --rc genhtml_function_coverage=1 00:07:18.810 --rc genhtml_legend=1 00:07:18.810 --rc geninfo_all_blocks=1 00:07:18.810 --rc geninfo_unexecuted_blocks=1 00:07:18.810 00:07:18.810 ' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.810 --rc genhtml_branch_coverage=1 00:07:18.810 --rc genhtml_function_coverage=1 00:07:18.810 --rc genhtml_legend=1 00:07:18.810 --rc geninfo_all_blocks=1 00:07:18.810 --rc geninfo_unexecuted_blocks=1 00:07:18.810 00:07:18.810 ' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.810 --rc genhtml_branch_coverage=1 00:07:18.810 --rc genhtml_function_coverage=1 00:07:18.810 --rc genhtml_legend=1 00:07:18.810 --rc geninfo_all_blocks=1 00:07:18.810 --rc 
geninfo_unexecuted_blocks=1 00:07:18.810 00:07:18.810 ' 00:07:18.810 03:50:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.810 --rc genhtml_branch_coverage=1 00:07:18.810 --rc genhtml_function_coverage=1 00:07:18.810 --rc genhtml_legend=1 00:07:18.810 --rc geninfo_all_blocks=1 00:07:18.810 --rc geninfo_unexecuted_blocks=1 00:07:18.810 00:07:18.810 ' 00:07:18.810 03:50:52 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:18.810 03:50:52 -- nvmf/common.sh@7 -- # uname -s 00:07:18.810 03:50:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.810 03:50:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.810 03:50:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.810 03:50:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.810 03:50:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.810 03:50:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.810 03:50:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.810 03:50:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.810 03:50:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.810 03:50:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.810 03:50:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:18.810 03:50:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:18.810 03:50:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.810 03:50:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.810 03:50:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:18.810 03:50:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.810 03:50:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.810 03:50:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.810 03:50:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.811 03:50:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.811 03:50:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.811 03:50:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.811 03:50:52 -- paths/export.sh@5 -- # export PATH 00:07:18.811 03:50:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.811 03:50:52 -- nvmf/common.sh@46 -- # : 0 00:07:18.811 03:50:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:18.811 03:50:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:18.811 03:50:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:18.811 03:50:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.811 03:50:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.811 03:50:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:18.811 03:50:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:18.811 03:50:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:18.811 03:50:52 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:18.811 03:50:52 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:18.811 03:50:52 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:18.811 03:50:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:18.811 03:50:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.811 03:50:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:18.811 03:50:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:18.811 03:50:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:18.811 03:50:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.811 03:50:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.811 03:50:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.811 03:50:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:18.811 03:50:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:18.811 03:50:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:18.811 03:50:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:18.811 03:50:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:18.811 03:50:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:18.811 03:50:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.811 03:50:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.811 03:50:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:18.811 03:50:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:18.811 03:50:52 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:18.811 03:50:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:18.811 03:50:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:18.811 03:50:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.811 03:50:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:18.811 03:50:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:18.811 03:50:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:18.811 03:50:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:18.811 03:50:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:18.811 03:50:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:18.811 Cannot find device "nvmf_tgt_br" 00:07:18.811 03:50:52 -- nvmf/common.sh@154 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:18.811 Cannot find device "nvmf_tgt_br2" 00:07:18.811 03:50:52 -- nvmf/common.sh@155 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:18.811 03:50:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:18.811 Cannot find device "nvmf_tgt_br" 00:07:18.811 03:50:52 -- nvmf/common.sh@157 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:18.811 Cannot find device "nvmf_tgt_br2" 00:07:18.811 03:50:52 -- nvmf/common.sh@158 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:18.811 03:50:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:18.811 03:50:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:18.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.811 03:50:52 -- nvmf/common.sh@161 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:18.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.811 03:50:52 -- nvmf/common.sh@162 -- # true 00:07:18.811 03:50:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:18.811 03:50:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:18.811 03:50:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:18.811 03:50:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:18.811 03:50:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:18.811 03:50:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:18.811 03:50:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:18.811 03:50:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:18.811 03:50:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:18.811 03:50:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:18.811 03:50:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:18.811 03:50:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:18.811 03:50:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:18.811 03:50:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:18.811 03:50:52 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:18.811 03:50:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:18.811 03:50:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:18.811 03:50:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:18.811 03:50:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:18.811 03:50:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:18.811 03:50:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:18.811 03:50:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:18.811 03:50:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:18.811 03:50:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:18.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:07:18.811 00:07:18.811 --- 10.0.0.2 ping statistics --- 00:07:18.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.811 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:18.811 03:50:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:18.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:18.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:07:18.811 00:07:18.811 --- 10.0.0.3 ping statistics --- 00:07:18.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.811 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:18.811 03:50:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:18.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:18.811 00:07:18.811 --- 10.0.0.1 ping statistics --- 00:07:18.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.811 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:18.811 03:50:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.811 03:50:53 -- nvmf/common.sh@421 -- # return 0 00:07:18.811 03:50:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:18.811 03:50:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.811 03:50:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:18.811 03:50:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:18.811 03:50:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.811 03:50:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:18.811 03:50:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:18.811 03:50:53 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:18.811 03:50:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:18.811 03:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.811 03:50:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 ************************************ 00:07:18.811 START TEST nvmf_filesystem_no_in_capsule 00:07:18.811 ************************************ 00:07:18.812 03:50:53 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:18.812 03:50:53 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:18.812 03:50:53 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:18.812 03:50:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:18.812 03:50:53 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:18.812 03:50:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.812 03:50:53 -- nvmf/common.sh@469 -- # nvmfpid=60593 00:07:18.812 03:50:53 -- nvmf/common.sh@470 -- # waitforlisten 60593 00:07:18.812 03:50:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.812 03:50:53 -- common/autotest_common.sh@829 -- # '[' -z 60593 ']' 00:07:18.812 03:50:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.812 03:50:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.812 03:50:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.812 03:50:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.812 03:50:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.812 [2024-11-08 03:50:53.124018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.812 [2024-11-08 03:50:53.124126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.812 [2024-11-08 03:50:53.256555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.812 [2024-11-08 03:50:53.345505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.812 [2024-11-08 03:50:53.345659] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.812 [2024-11-08 03:50:53.345672] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.812 [2024-11-08 03:50:53.345680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
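nvmfappstart reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch of what the @468-@470 records do (the polling loop is a simplification; the trace only shows the wait message for /var/tmp/spdk.sock):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten: poll until the app accepts RPCs on the default socket
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done

The -m 0xF core mask pins four reactors (cores 0-3), which is why four "Reactor started" notices follow.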
00:07:18.812 [2024-11-08 03:50:53.345944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.812 [2024-11-08 03:50:53.346040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.812 [2024-11-08 03:50:53.346283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.812 [2024-11-08 03:50:53.346296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.070 03:50:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.070 03:50:54 -- common/autotest_common.sh@862 -- # return 0 00:07:19.070 03:50:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:19.070 03:50:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.070 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.070 03:50:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.070 03:50:54 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:19.070 03:50:54 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.070 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.070 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.070 [2024-11-08 03:50:54.120905] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.070 03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.070 03:50:54 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:19.070 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.070 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.329 Malloc1 00:07:19.329 03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.329 03:50:54 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.329 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.329 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.329 03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.329 03:50:54 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.329 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.329 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.329 03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.330 03:50:54 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.330 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.330 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.330 [2024-11-08 03:50:54.371914] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.330 03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.330 03:50:54 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.330 03:50:54 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:19.330 03:50:54 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:19.330 03:50:54 -- common/autotest_common.sh@1369 -- # local bs 00:07:19.330 03:50:54 -- common/autotest_common.sh@1370 -- # local nb 00:07:19.330 03:50:54 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.330 03:50:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.330 03:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.330 
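The rpc_cmd calls at filesystem.sh@52-@56 provision the entire target over JSON-RPC. Assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock, the equivalent by hand is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, in-capsule data disabled
    $rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, any host that can reach 10.0.0.2:4420 can connect to cnode1 and sees Malloc1 as namespace 1.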
03:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.330 03:50:54 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:19.330 { 00:07:19.330 "aliases": [ 00:07:19.330 "fec3a9e2-8bb8-4f26-a530-ffb3bcb19fab" 00:07:19.330 ], 00:07:19.330 "assigned_rate_limits": { 00:07:19.330 "r_mbytes_per_sec": 0, 00:07:19.330 "rw_ios_per_sec": 0, 00:07:19.330 "rw_mbytes_per_sec": 0, 00:07:19.330 "w_mbytes_per_sec": 0 00:07:19.330 }, 00:07:19.330 "block_size": 512, 00:07:19.330 "claim_type": "exclusive_write", 00:07:19.330 "claimed": true, 00:07:19.330 "driver_specific": {}, 00:07:19.330 "memory_domains": [ 00:07:19.330 { 00:07:19.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.330 "dma_device_type": 2 00:07:19.330 } 00:07:19.330 ], 00:07:19.330 "name": "Malloc1", 00:07:19.330 "num_blocks": 1048576, 00:07:19.330 "product_name": "Malloc disk", 00:07:19.330 "supported_io_types": { 00:07:19.330 "abort": true, 00:07:19.330 "compare": false, 00:07:19.330 "compare_and_write": false, 00:07:19.330 "flush": true, 00:07:19.330 "nvme_admin": false, 00:07:19.330 "nvme_io": false, 00:07:19.330 "read": true, 00:07:19.330 "reset": true, 00:07:19.330 "unmap": true, 00:07:19.330 "write": true, 00:07:19.330 "write_zeroes": true 00:07:19.330 }, 00:07:19.330 "uuid": "fec3a9e2-8bb8-4f26-a530-ffb3bcb19fab", 00:07:19.330 "zoned": false 00:07:19.330 } 00:07:19.330 ]' 00:07:19.330 03:50:54 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:19.330 03:50:54 -- common/autotest_common.sh@1372 -- # bs=512 00:07:19.330 03:50:54 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:19.589 03:50:54 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:19.589 03:50:54 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:19.589 03:50:54 -- common/autotest_common.sh@1377 -- # echo 512 00:07:19.589 03:50:54 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.589 03:50:54 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.589 03:50:54 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.589 03:50:54 -- common/autotest_common.sh@1187 -- # local i=0 00:07:19.589 03:50:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.589 03:50:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:19.589 03:50:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:22.123 03:50:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:22.123 03:50:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:22.123 03:50:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.123 03:50:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:22.123 03:50:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.123 03:50:56 -- common/autotest_common.sh@1197 -- # return 0 00:07:22.123 03:50:56 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:22.123 03:50:56 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:22.123 03:50:56 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:22.123 03:50:56 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:22.123 03:50:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:22.123 03:50:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:22.123 03:50:56 -- 
setup/common.sh@80 -- # echo 536870912 00:07:22.123 03:50:56 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:22.123 03:50:56 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:22.123 03:50:56 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:22.123 03:50:56 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.123 03:50:56 -- target/filesystem.sh@69 -- # partprobe 00:07:22.123 03:50:56 -- target/filesystem.sh@70 -- # sleep 1 00:07:23.082 03:50:57 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:23.082 03:50:57 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.082 03:50:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:23.082 03:50:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.082 03:50:57 -- common/autotest_common.sh@10 -- # set +x 00:07:23.082 ************************************ 00:07:23.082 START TEST filesystem_ext4 00:07:23.082 ************************************ 00:07:23.082 03:50:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.082 03:50:57 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.082 03:50:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.082 03:50:57 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.082 03:50:57 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:23.082 03:50:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:23.082 03:50:57 -- common/autotest_common.sh@914 -- # local i=0 00:07:23.082 03:50:57 -- common/autotest_common.sh@915 -- # local force 00:07:23.082 03:50:57 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:23.082 03:50:57 -- common/autotest_common.sh@918 -- # force=-F 00:07:23.082 03:50:57 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.082 mke2fs 1.47.0 (5-Feb-2023) 00:07:23.082 Discarding device blocks: 0/522240 done 00:07:23.082 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.082 Filesystem UUID: 781a5c0a-b245-46e5-94c3-e28779e0a30c 00:07:23.082 Superblock backups stored on blocks: 00:07:23.082 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.082 00:07:23.082 Allocating group tables: 0/64 done 00:07:23.082 Writing inode tables: 0/64 done 00:07:23.082 Creating journal (8192 blocks): done 00:07:23.082 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:23.082 00:07:23.082 03:50:58 -- common/autotest_common.sh@931 -- # return 0 00:07:23.082 03:50:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.351 03:51:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.351 03:51:03 -- target/filesystem.sh@25 -- # sync 00:07:28.610 03:51:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.610 03:51:03 -- target/filesystem.sh@27 -- # sync 00:07:28.610 03:51:03 -- target/filesystem.sh@29 -- # i=0 00:07:28.610 03:51:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.610 03:51:03 -- target/filesystem.sh@37 -- # kill -0 60593 00:07:28.610 03:51:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.610 03:51:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.610 03:51:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.610 03:51:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.610 00:07:28.610 real 0m5.656s 00:07:28.610 user 0m0.026s 00:07:28.610 sys 0m0.067s 
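Every filesystem_* subtest runs the same smoke test against the freshly formatted partition once mkfs succeeds; the @23-@30 and @37-@43 records above correspond roughly to:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa          # data reaches the target over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    i=0
    umount /mnt/device             # presumably retried by the script if the mount is busy
    kill -0 "$nvmfpid"             # @37: the target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still visible

The point is not filesystem coverage per se but that metadata-heavy I/O (journal creation, inode tables) survives a full round trip through the TCP transport.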
00:07:28.610 03:51:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.610 03:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.610 ************************************ 00:07:28.610 END TEST filesystem_ext4 00:07:28.610 ************************************ 00:07:28.610 03:51:03 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.610 03:51:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.610 03:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.610 03:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.610 ************************************ 00:07:28.610 START TEST filesystem_btrfs 00:07:28.610 ************************************ 00:07:28.610 03:51:03 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.610 03:51:03 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.610 03:51:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.610 03:51:03 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.610 03:51:03 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:28.610 03:51:03 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:28.610 03:51:03 -- common/autotest_common.sh@914 -- # local i=0 00:07:28.610 03:51:03 -- common/autotest_common.sh@915 -- # local force 00:07:28.610 03:51:03 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:28.610 03:51:03 -- common/autotest_common.sh@920 -- # force=-f 00:07:28.610 03:51:03 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.869 btrfs-progs v6.8.1 00:07:28.869 See https://btrfs.readthedocs.io for more information. 00:07:28.869 00:07:28.869 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:28.869 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.869 this does not affect your deployments: 00:07:28.869 - DUP for metadata (-m dup) 00:07:28.869 - enabled no-holes (-O no-holes) 00:07:28.869 - enabled free-space-tree (-R free-space-tree) 00:07:28.869 00:07:28.869 Label: (null) 00:07:28.869 UUID: 7d6ac67b-1c78-4345-a240-80619da188a9 00:07:28.869 Node size: 16384 00:07:28.869 Sector size: 4096 (CPU page size: 4096) 00:07:28.869 Filesystem size: 510.00MiB 00:07:28.869 Block group profiles: 00:07:28.869 Data: single 8.00MiB 00:07:28.869 Metadata: DUP 32.00MiB 00:07:28.869 System: DUP 8.00MiB 00:07:28.869 SSD detected: yes 00:07:28.869 Zoned device: no 00:07:28.869 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.869 Checksum: crc32c 00:07:28.869 Number of devices: 1 00:07:28.869 Devices: 00:07:28.869 ID SIZE PATH 00:07:28.869 1 510.00MiB /dev/nvme0n1p1 00:07:28.869 00:07:28.869 03:51:03 -- common/autotest_common.sh@931 -- # return 0 00:07:28.869 03:51:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.869 03:51:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.869 03:51:03 -- target/filesystem.sh@25 -- # sync 00:07:28.869 03:51:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.869 03:51:03 -- target/filesystem.sh@27 -- # sync 00:07:28.869 03:51:03 -- target/filesystem.sh@29 -- # i=0 00:07:28.869 03:51:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.869 03:51:03 -- target/filesystem.sh@37 -- # kill -0 60593 00:07:28.869 03:51:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.869 03:51:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.869 03:51:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.869 03:51:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.869 00:07:28.869 real 0m0.292s 00:07:28.869 user 0m0.024s 00:07:28.869 sys 0m0.066s 00:07:28.869 03:51:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.869 ************************************ 00:07:28.869 END TEST filesystem_btrfs 00:07:28.869 ************************************ 00:07:28.869 03:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.869 03:51:03 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.869 03:51:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.869 03:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.869 03:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.869 ************************************ 00:07:28.869 START TEST filesystem_xfs 00:07:28.869 ************************************ 00:07:28.869 03:51:03 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.869 03:51:03 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.869 03:51:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.869 03:51:03 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.869 03:51:03 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:28.869 03:51:03 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:28.870 03:51:03 -- common/autotest_common.sh@914 -- # local i=0 00:07:28.870 03:51:03 -- common/autotest_common.sh@915 -- # local force 00:07:28.870 03:51:03 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:28.870 03:51:03 -- common/autotest_common.sh@920 -- # force=-f 00:07:28.870 03:51:03 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:29.128 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.128 = sectsz=512 attr=2, projid32bit=1 00:07:29.128 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.128 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.128 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.128 = sunit=0 swidth=0 blks 00:07:29.128 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.128 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.128 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.128 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:29.695 Discarding blocks...Done. 00:07:29.695 03:51:04 -- common/autotest_common.sh@931 -- # return 0 00:07:29.695 03:51:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.226 03:51:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.226 03:51:07 -- target/filesystem.sh@25 -- # sync 00:07:32.226 03:51:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.226 03:51:07 -- target/filesystem.sh@27 -- # sync 00:07:32.226 03:51:07 -- target/filesystem.sh@29 -- # i=0 00:07:32.226 03:51:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.226 03:51:07 -- target/filesystem.sh@37 -- # kill -0 60593 00:07:32.226 03:51:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.226 03:51:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.226 03:51:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.226 03:51:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.226 ************************************ 00:07:32.226 END TEST filesystem_xfs 00:07:32.226 ************************************ 00:07:32.226 00:07:32.226 real 0m3.254s 00:07:32.226 user 0m0.020s 00:07:32.226 sys 0m0.060s 00:07:32.226 03:51:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.226 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:32.226 03:51:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.226 03:51:07 -- target/filesystem.sh@93 -- # sync 00:07:32.226 03:51:07 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.226 03:51:07 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.226 03:51:07 -- common/autotest_common.sh@1208 -- # local i=0 00:07:32.226 03:51:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:32.226 03:51:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.226 03:51:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.226 03:51:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:32.226 03:51:07 -- common/autotest_common.sh@1220 -- # return 0 00:07:32.226 03:51:07 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.226 03:51:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.226 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:32.226 03:51:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.226 03:51:07 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.226 03:51:07 -- target/filesystem.sh@101 -- # killprocess 60593 00:07:32.226 03:51:07 -- common/autotest_common.sh@936 -- # '[' -z 60593 ']' 00:07:32.226 03:51:07 -- common/autotest_common.sh@940 -- # kill -0 60593 00:07:32.226 03:51:07 -- common/autotest_common.sh@941 -- # uname 00:07:32.226 03:51:07 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.226 03:51:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60593 00:07:32.485 killing process with pid 60593 00:07:32.485 03:51:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.485 03:51:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.485 03:51:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60593' 00:07:32.485 03:51:07 -- common/autotest_common.sh@955 -- # kill 60593 00:07:32.485 03:51:07 -- common/autotest_common.sh@960 -- # wait 60593 00:07:32.775 ************************************ 00:07:32.775 END TEST nvmf_filesystem_no_in_capsule 00:07:32.775 ************************************ 00:07:32.775 03:51:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.775 00:07:32.775 real 0m14.742s 00:07:32.775 user 0m56.619s 00:07:32.775 sys 0m1.747s 00:07:32.775 03:51:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.775 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:32.775 03:51:07 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:32.775 03:51:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:32.775 03:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.775 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:32.775 ************************************ 00:07:32.775 START TEST nvmf_filesystem_in_capsule 00:07:32.775 ************************************ 00:07:32.775 03:51:07 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:32.775 03:51:07 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:32.775 03:51:07 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:32.775 03:51:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:32.775 03:51:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.775 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:32.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.775 03:51:07 -- nvmf/common.sh@469 -- # nvmfpid=60964 00:07:32.775 03:51:07 -- nvmf/common.sh@470 -- # waitforlisten 60964 00:07:32.775 03:51:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.775 03:51:07 -- common/autotest_common.sh@829 -- # '[' -z 60964 ']' 00:07:32.775 03:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.775 03:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.775 03:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.775 03:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.775 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:33.034 [2024-11-08 03:51:07.902442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
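The in_capsule pass that starts here repeats the identical scenario; the only functional difference is the transport's in-capsule data size, visible in the upcoming filesystem.sh@52 record:

    # first pass: write payloads always travel in separate data PDUs
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this pass: writes up to 4 KiB ride inside the command capsule itself
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

Small-block filesystem metadata writes are precisely the traffic that exercises the in-capsule path, which is why the whole ext4/btrfs/xfs battery is rerun rather than a single targeted I/O test.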
00:07:33.034 [2024-11-08 03:51:07.902545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.034 [2024-11-08 03:51:08.036988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.034 [2024-11-08 03:51:08.129059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.034 [2024-11-08 03:51:08.129390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.034 [2024-11-08 03:51:08.129595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.034 [2024-11-08 03:51:08.129623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.034 [2024-11-08 03:51:08.129752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.034 [2024-11-08 03:51:08.130025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.034 [2024-11-08 03:51:08.130244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.034 [2024-11-08 03:51:08.130253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.970 03:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.970 03:51:08 -- common/autotest_common.sh@862 -- # return 0 00:07:33.970 03:51:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:33.970 03:51:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.970 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 03:51:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.970 03:51:08 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:33.970 03:51:08 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:33.970 03:51:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 [2024-11-08 03:51:08.878120] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.970 03:51:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.970 03:51:08 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:33.970 03:51:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 Malloc1 00:07:33.970 03:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.970 03:51:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:33.970 03:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 03:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.970 03:51:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:33.970 03:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 03:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.970 03:51:09 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.970 03:51:09 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 [2024-11-08 03:51:09.067349] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.970 03:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.970 03:51:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:33.970 03:51:09 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:33.970 03:51:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:33.970 03:51:09 -- common/autotest_common.sh@1369 -- # local bs 00:07:33.970 03:51:09 -- common/autotest_common.sh@1370 -- # local nb 00:07:33.970 03:51:09 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:33.970 03:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.970 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:07:34.229 03:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.229 03:51:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:34.229 { 00:07:34.229 "aliases": [ 00:07:34.229 "02deab76-bc8b-4684-a9cc-2129ed7d9181" 00:07:34.229 ], 00:07:34.229 "assigned_rate_limits": { 00:07:34.229 "r_mbytes_per_sec": 0, 00:07:34.229 "rw_ios_per_sec": 0, 00:07:34.229 "rw_mbytes_per_sec": 0, 00:07:34.229 "w_mbytes_per_sec": 0 00:07:34.229 }, 00:07:34.229 "block_size": 512, 00:07:34.229 "claim_type": "exclusive_write", 00:07:34.229 "claimed": true, 00:07:34.229 "driver_specific": {}, 00:07:34.229 "memory_domains": [ 00:07:34.229 { 00:07:34.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.229 "dma_device_type": 2 00:07:34.229 } 00:07:34.229 ], 00:07:34.229 "name": "Malloc1", 00:07:34.229 "num_blocks": 1048576, 00:07:34.229 "product_name": "Malloc disk", 00:07:34.229 "supported_io_types": { 00:07:34.229 "abort": true, 00:07:34.229 "compare": false, 00:07:34.229 "compare_and_write": false, 00:07:34.229 "flush": true, 00:07:34.229 "nvme_admin": false, 00:07:34.229 "nvme_io": false, 00:07:34.229 "read": true, 00:07:34.229 "reset": true, 00:07:34.229 "unmap": true, 00:07:34.229 "write": true, 00:07:34.229 "write_zeroes": true 00:07:34.229 }, 00:07:34.229 "uuid": "02deab76-bc8b-4684-a9cc-2129ed7d9181", 00:07:34.229 "zoned": false 00:07:34.229 } 00:07:34.229 ]' 00:07:34.229 03:51:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:34.229 03:51:09 -- common/autotest_common.sh@1372 -- # bs=512 00:07:34.229 03:51:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:34.229 03:51:09 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:34.229 03:51:09 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:34.229 03:51:09 -- common/autotest_common.sh@1377 -- # echo 512 00:07:34.229 03:51:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.229 03:51:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.487 03:51:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.487 03:51:09 -- common/autotest_common.sh@1187 -- # local i=0 00:07:34.487 03:51:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.487 03:51:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:34.488 03:51:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:36.388 03:51:11 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:36.388 03:51:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:36.388 03:51:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.388 03:51:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:36.388 03:51:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.388 03:51:11 -- common/autotest_common.sh@1197 -- # return 0 00:07:36.388 03:51:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:36.388 03:51:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:36.388 03:51:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:36.388 03:51:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:36.388 03:51:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:36.388 03:51:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:36.388 03:51:11 -- setup/common.sh@80 -- # echo 536870912 00:07:36.388 03:51:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:36.388 03:51:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:36.388 03:51:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:36.388 03:51:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:36.388 03:51:11 -- target/filesystem.sh@69 -- # partprobe 00:07:36.647 03:51:11 -- target/filesystem.sh@70 -- # sleep 1 00:07:37.582 03:51:12 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:37.582 03:51:12 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:37.582 03:51:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:37.582 03:51:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.582 03:51:12 -- common/autotest_common.sh@10 -- # set +x 00:07:37.582 ************************************ 00:07:37.582 START TEST filesystem_in_capsule_ext4 00:07:37.582 ************************************ 00:07:37.582 03:51:12 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:37.582 03:51:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:37.582 03:51:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.582 03:51:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:37.582 03:51:12 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:37.582 03:51:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:37.582 03:51:12 -- common/autotest_common.sh@914 -- # local i=0 00:07:37.582 03:51:12 -- common/autotest_common.sh@915 -- # local force 00:07:37.582 03:51:12 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:37.582 03:51:12 -- common/autotest_common.sh@918 -- # force=-F 00:07:37.582 03:51:12 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:37.582 mke2fs 1.47.0 (5-Feb-2023) 00:07:37.582 Discarding device blocks: 0/522240 done 00:07:37.583 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:37.583 Filesystem UUID: 2402019b-20a9-433d-bcfe-0d276acecce4 00:07:37.583 Superblock backups stored on blocks: 00:07:37.583 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:37.583 00:07:37.583 Allocating group tables: 0/64 done 00:07:37.583 Writing inode tables: 0/64 done 00:07:37.583 Creating journal (8192 blocks): done 00:07:37.583 Writing superblocks and filesystem accounting information: 0/64 done 00:07:37.583 00:07:37.583 03:51:12 
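The size check that gates each pass (@58-@67 above) confirms that the geometry the host sees over NVMe/TCP matches the bdev backing it. In numbers, per the jq output:

    bs=512                                       # .block_size from bdev_get_bdevs
    nb=1048576                                   # .num_blocks
    bdev_size=$(( bs * nb / 1024 / 1024 ))       # 512 (MiB)
    malloc_size=$(( bdev_size * 1024 * 1024 ))   # 536870912 bytes
    # sec_size_to_bytes presumably derives the same figure from
    # /sys/block/nvme0n1; the test only proceeds if
    (( nvme_size == malloc_size ))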
-- common/autotest_common.sh@931 -- # return 0 00:07:37.583 03:51:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.190 03:51:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.190 03:51:18 -- target/filesystem.sh@25 -- # sync 00:07:44.190 03:51:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.190 03:51:18 -- target/filesystem.sh@27 -- # sync 00:07:44.190 03:51:18 -- target/filesystem.sh@29 -- # i=0 00:07:44.190 03:51:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.190 03:51:18 -- target/filesystem.sh@37 -- # kill -0 60964 00:07:44.190 03:51:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.190 03:51:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.190 03:51:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.190 ************************************ 00:07:44.190 END TEST filesystem_in_capsule_ext4 00:07:44.190 ************************************ 00:07:44.190 00:07:44.190 real 0m5.558s 00:07:44.190 user 0m0.024s 00:07:44.190 sys 0m0.064s 00:07:44.190 03:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.190 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 03:51:18 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:44.190 03:51:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.190 03:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.190 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 ************************************ 00:07:44.190 START TEST filesystem_in_capsule_btrfs 00:07:44.190 ************************************ 00:07:44.190 03:51:18 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:44.190 03:51:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:44.190 03:51:18 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:44.190 03:51:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.190 03:51:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.190 03:51:18 -- common/autotest_common.sh@915 -- # local force 00:07:44.190 03:51:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:44.190 03:51:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:44.190 03:51:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:44.190 btrfs-progs v6.8.1 00:07:44.190 See https://btrfs.readthedocs.io for more information. 00:07:44.190 00:07:44.190 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:44.190 NOTE: several default settings have changed in version 5.15, please make sure 00:07:44.190 this does not affect your deployments: 00:07:44.190 - DUP for metadata (-m dup) 00:07:44.190 - enabled no-holes (-O no-holes) 00:07:44.190 - enabled free-space-tree (-R free-space-tree) 00:07:44.190 00:07:44.190 Label: (null) 00:07:44.190 UUID: 4dc1deaa-ce08-408b-96b9-3b6b5b29f354 00:07:44.190 Node size: 16384 00:07:44.190 Sector size: 4096 (CPU page size: 4096) 00:07:44.190 Filesystem size: 510.00MiB 00:07:44.190 Block group profiles: 00:07:44.190 Data: single 8.00MiB 00:07:44.190 Metadata: DUP 32.00MiB 00:07:44.190 System: DUP 8.00MiB 00:07:44.190 SSD detected: yes 00:07:44.190 Zoned device: no 00:07:44.190 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:44.190 Checksum: crc32c 00:07:44.190 Number of devices: 1 00:07:44.190 Devices: 00:07:44.190 ID SIZE PATH 00:07:44.190 1 510.00MiB /dev/nvme0n1p1 00:07:44.190 00:07:44.190 03:51:18 -- common/autotest_common.sh@931 -- # return 0 00:07:44.190 03:51:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.190 03:51:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.190 03:51:18 -- target/filesystem.sh@25 -- # sync 00:07:44.190 03:51:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.190 03:51:18 -- target/filesystem.sh@27 -- # sync 00:07:44.190 03:51:18 -- target/filesystem.sh@29 -- # i=0 00:07:44.190 03:51:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.190 03:51:18 -- target/filesystem.sh@37 -- # kill -0 60964 00:07:44.190 03:51:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.190 03:51:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.190 03:51:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.190 ************************************ 00:07:44.190 END TEST filesystem_in_capsule_btrfs 00:07:44.190 ************************************ 00:07:44.190 00:07:44.190 real 0m0.268s 00:07:44.190 user 0m0.016s 00:07:44.190 sys 0m0.060s 00:07:44.190 03:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.190 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 03:51:18 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.190 03:51:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.190 03:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.190 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 ************************************ 00:07:44.190 START TEST filesystem_in_capsule_xfs 00:07:44.190 ************************************ 00:07:44.190 03:51:18 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.190 03:51:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.190 03:51:18 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.190 03:51:18 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:44.190 03:51:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.190 03:51:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.190 03:51:18 -- common/autotest_common.sh@915 -- # local force 00:07:44.190 03:51:18 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:44.190 03:51:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:44.190 03:51:18 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.190 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.190 = sectsz=512 attr=2, projid32bit=1 00:07:44.190 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.190 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.190 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.190 = sunit=0 swidth=0 blks 00:07:44.190 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.190 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.190 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.190 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:44.190 Discarding blocks...Done. 00:07:44.190 03:51:19 -- common/autotest_common.sh@931 -- # return 0 00:07:44.190 03:51:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.090 03:51:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.090 03:51:21 -- target/filesystem.sh@25 -- # sync 00:07:46.090 03:51:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.090 03:51:21 -- target/filesystem.sh@27 -- # sync 00:07:46.090 03:51:21 -- target/filesystem.sh@29 -- # i=0 00:07:46.090 03:51:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.090 03:51:21 -- target/filesystem.sh@37 -- # kill -0 60964 00:07:46.090 03:51:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.090 03:51:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.090 03:51:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.090 03:51:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.090 ************************************ 00:07:46.090 END TEST filesystem_in_capsule_xfs 00:07:46.090 ************************************ 00:07:46.090 00:07:46.090 real 0m2.612s 00:07:46.090 user 0m0.025s 00:07:46.090 sys 0m0.053s 00:07:46.090 03:51:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.090 03:51:21 -- common/autotest_common.sh@10 -- # set +x 00:07:46.090 03:51:21 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:46.090 03:51:21 -- target/filesystem.sh@93 -- # sync 00:07:46.090 03:51:21 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.349 03:51:21 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.349 03:51:21 -- common/autotest_common.sh@1208 -- # local i=0 00:07:46.349 03:51:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:46.349 03:51:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.349 03:51:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:46.349 03:51:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.349 03:51:21 -- common/autotest_common.sh@1220 -- # return 0 00:07:46.349 03:51:21 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.349 03:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.349 03:51:21 -- common/autotest_common.sh@10 -- # set +x 00:07:46.349 03:51:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.349 03:51:21 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:46.349 03:51:21 -- target/filesystem.sh@101 -- # killprocess 60964 00:07:46.349 03:51:21 -- common/autotest_common.sh@936 -- # '[' -z 60964 ']' 00:07:46.349 03:51:21 -- common/autotest_common.sh@940 -- # kill -0 60964 00:07:46.349 03:51:21 -- 
common/autotest_common.sh@941 -- # uname 00:07:46.349 03:51:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:46.349 03:51:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60964 00:07:46.349 killing process with pid 60964 00:07:46.349 03:51:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:46.349 03:51:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:46.349 03:51:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60964' 00:07:46.349 03:51:21 -- common/autotest_common.sh@955 -- # kill 60964 00:07:46.349 03:51:21 -- common/autotest_common.sh@960 -- # wait 60964 00:07:46.915 03:51:21 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:46.915 00:07:46.915 real 0m13.975s 00:07:46.915 user 0m53.783s 00:07:46.915 sys 0m1.530s 00:07:46.915 03:51:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.915 ************************************ 00:07:46.915 END TEST nvmf_filesystem_in_capsule 00:07:46.915 ************************************ 00:07:46.915 03:51:21 -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 03:51:21 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:46.915 03:51:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:46.915 03:51:21 -- nvmf/common.sh@116 -- # sync 00:07:46.915 03:51:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:46.915 03:51:21 -- nvmf/common.sh@119 -- # set +e 00:07:46.915 03:51:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:46.915 03:51:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:46.915 rmmod nvme_tcp 00:07:46.915 rmmod nvme_fabrics 00:07:46.915 rmmod nvme_keyring 00:07:46.915 03:51:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:46.915 03:51:21 -- nvmf/common.sh@123 -- # set -e 00:07:46.915 03:51:21 -- nvmf/common.sh@124 -- # return 0 00:07:46.915 03:51:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:46.915 03:51:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:46.915 03:51:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:46.915 03:51:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:46.915 03:51:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.915 03:51:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:46.915 03:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.915 03:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.915 03:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.915 03:51:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:46.915 00:07:46.915 real 0m29.744s 00:07:46.915 user 1m50.827s 00:07:46.915 sys 0m3.696s 00:07:46.915 03:51:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.915 ************************************ 00:07:46.915 03:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 END TEST nvmf_filesystem 00:07:46.916 ************************************ 00:07:47.174 03:51:22 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.174 03:51:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.174 03:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:47.174 ************************************ 00:07:47.174 START TEST nvmf_discovery 00:07:47.174 ************************************ 00:07:47.174 03:51:22 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.174 * Looking for test storage... 00:07:47.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.174 03:51:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.174 03:51:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.174 03:51:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.174 03:51:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.174 03:51:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.174 03:51:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.174 03:51:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.174 03:51:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.174 03:51:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.174 03:51:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.174 03:51:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.174 03:51:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.174 03:51:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.174 03:51:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.174 03:51:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.174 03:51:22 -- scripts/common.sh@344 -- # : 1 00:07:47.174 03:51:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.174 03:51:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.174 03:51:22 -- scripts/common.sh@364 -- # decimal 1 00:07:47.174 03:51:22 -- scripts/common.sh@352 -- # local d=1 00:07:47.174 03:51:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.174 03:51:22 -- scripts/common.sh@354 -- # echo 1 00:07:47.174 03:51:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.174 03:51:22 -- scripts/common.sh@365 -- # decimal 2 00:07:47.174 03:51:22 -- scripts/common.sh@352 -- # local d=2 00:07:47.174 03:51:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.174 03:51:22 -- scripts/common.sh@354 -- # echo 2 00:07:47.174 03:51:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.174 03:51:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.174 03:51:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.174 03:51:22 -- scripts/common.sh@367 -- # return 0 00:07:47.174 03:51:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.174 --rc genhtml_branch_coverage=1 00:07:47.174 --rc genhtml_function_coverage=1 00:07:47.174 --rc genhtml_legend=1 00:07:47.174 --rc geninfo_all_blocks=1 00:07:47.174 --rc geninfo_unexecuted_blocks=1 00:07:47.174 00:07:47.174 ' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.174 --rc genhtml_branch_coverage=1 00:07:47.174 --rc genhtml_function_coverage=1 00:07:47.174 --rc genhtml_legend=1 00:07:47.174 --rc geninfo_all_blocks=1 00:07:47.174 --rc geninfo_unexecuted_blocks=1 00:07:47.174 00:07:47.174 ' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.174 --rc genhtml_branch_coverage=1 00:07:47.174 --rc genhtml_function_coverage=1 00:07:47.174 --rc genhtml_legend=1 00:07:47.174 
--rc geninfo_all_blocks=1 00:07:47.174 --rc geninfo_unexecuted_blocks=1 00:07:47.174 00:07:47.174 ' 00:07:47.174 03:51:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.174 --rc genhtml_branch_coverage=1 00:07:47.174 --rc genhtml_function_coverage=1 00:07:47.174 --rc genhtml_legend=1 00:07:47.174 --rc geninfo_all_blocks=1 00:07:47.174 --rc geninfo_unexecuted_blocks=1 00:07:47.174 00:07:47.174 ' 00:07:47.174 03:51:22 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.175 03:51:22 -- nvmf/common.sh@7 -- # uname -s 00:07:47.175 03:51:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.175 03:51:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.175 03:51:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.175 03:51:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.175 03:51:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.175 03:51:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.175 03:51:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.175 03:51:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.175 03:51:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.175 03:51:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:47.175 03:51:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:47.175 03:51:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.175 03:51:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.175 03:51:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.175 03:51:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.175 03:51:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.175 03:51:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.175 03:51:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.175 03:51:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.175 03:51:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.175 03:51:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.175 03:51:22 -- paths/export.sh@5 -- # export PATH 00:07:47.175 03:51:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.175 03:51:22 -- nvmf/common.sh@46 -- # : 0 00:07:47.175 03:51:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.175 03:51:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.175 03:51:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.175 03:51:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.175 03:51:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.175 03:51:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.175 03:51:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.175 03:51:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.175 03:51:22 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:47.175 03:51:22 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:47.175 03:51:22 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:47.175 03:51:22 -- target/discovery.sh@15 -- # hash nvme 00:07:47.175 03:51:22 -- target/discovery.sh@20 -- # nvmftestinit 00:07:47.175 03:51:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:47.175 03:51:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.175 03:51:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:47.175 03:51:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:47.175 03:51:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:47.175 03:51:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.175 03:51:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.175 03:51:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.175 03:51:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:47.175 03:51:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:47.175 03:51:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.175 03:51:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.175 03:51:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.175 03:51:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:47.175 03:51:22 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.175 03:51:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.175 03:51:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.175 03:51:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.175 03:51:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.175 03:51:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.175 03:51:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.175 03:51:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.175 03:51:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:47.433 03:51:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:47.433 Cannot find device "nvmf_tgt_br" 00:07:47.433 03:51:22 -- nvmf/common.sh@154 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.433 Cannot find device "nvmf_tgt_br2" 00:07:47.433 03:51:22 -- nvmf/common.sh@155 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:47.433 03:51:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:47.433 Cannot find device "nvmf_tgt_br" 00:07:47.433 03:51:22 -- nvmf/common.sh@157 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:47.433 Cannot find device "nvmf_tgt_br2" 00:07:47.433 03:51:22 -- nvmf/common.sh@158 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:47.433 03:51:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:47.433 03:51:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.433 03:51:22 -- nvmf/common.sh@161 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.433 03:51:22 -- nvmf/common.sh@162 -- # true 00:07:47.433 03:51:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.433 03:51:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.433 03:51:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.433 03:51:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.433 03:51:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.433 03:51:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.433 03:51:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.433 03:51:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.433 03:51:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.433 03:51:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:47.433 03:51:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:47.433 03:51:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:47.433 03:51:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:47.433 03:51:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.433 03:51:22 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.433 03:51:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.433 03:51:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:47.433 03:51:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:47.433 03:51:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.692 03:51:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.692 03:51:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.692 03:51:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.692 03:51:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.692 03:51:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:47.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:07:47.692 00:07:47.692 --- 10.0.0.2 ping statistics --- 00:07:47.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.692 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:47.692 03:51:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:47.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:07:47.692 00:07:47.692 --- 10.0.0.3 ping statistics --- 00:07:47.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.692 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:47.692 03:51:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:47.692 00:07:47.692 --- 10.0.0.1 ping statistics --- 00:07:47.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.692 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:47.692 03:51:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.692 03:51:22 -- nvmf/common.sh@421 -- # return 0 00:07:47.692 03:51:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:47.692 03:51:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.692 03:51:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:47.692 03:51:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:47.692 03:51:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.692 03:51:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:47.692 03:51:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:47.692 03:51:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:47.692 03:51:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:47.692 03:51:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.692 03:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:47.692 03:51:22 -- nvmf/common.sh@469 -- # nvmfpid=61516 00:07:47.692 03:51:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.692 03:51:22 -- nvmf/common.sh@470 -- # waitforlisten 61516 00:07:47.692 03:51:22 -- common/autotest_common.sh@829 -- # '[' -z 61516 ']' 00:07:47.692 03:51:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
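Everything the target needs sits behind the `nvmf_veth_init` sequence traced above: a network namespace for the target side, veth pairs whose host ends are enslaved to a bridge, addresses from 10.0.0.0/24, and iptables rules admitting the NVMe/TCP port. A condensed sketch of that topology, using the device names from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, and the stale-device cleanup are elided for brevity):

```bash
#!/usr/bin/env bash
# Condensed sketch of the topology traced above: one namespace for the
# target, veth pairs bridged on the host side, and iptables accept rules.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives in the ns

ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                       # host-side bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                    # same sanity check as the log
```

The closing ping mirrors the connectivity checks in the log: packets leave nvmf_init_if, cross the bridge via the two peer ends, and reach 10.0.0.2 inside the namespace, where the target process is then launched with `ip netns exec`.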
00:07:47.692 03:51:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.692 03:51:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.692 03:51:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.692 03:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:47.692 [2024-11-08 03:51:22.695930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.692 [2024-11-08 03:51:22.696025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.951 [2024-11-08 03:51:22.836326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.951 [2024-11-08 03:51:22.921824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.951 [2024-11-08 03:51:22.921970] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.951 [2024-11-08 03:51:22.921984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.951 [2024-11-08 03:51:22.921993] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.951 [2024-11-08 03:51:22.922123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.951 [2024-11-08 03:51:22.922395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.951 [2024-11-08 03:51:22.924746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.951 [2024-11-08 03:51:22.924845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.888 03:51:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.888 03:51:23 -- common/autotest_common.sh@862 -- # return 0 00:07:48.888 03:51:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:48.888 03:51:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.888 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.888 03:51:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.888 03:51:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.888 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.888 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.888 [2024-11-08 03:51:23.769684] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.888 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.888 03:51:23 -- target/discovery.sh@26 -- # seq 1 4 00:07:48.888 03:51:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.888 03:51:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:48.888 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.888 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.888 Null1 00:07:48.888 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.888 03:51:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.888 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 [2024-11-08 03:51:23.827570] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.889 03:51:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 Null2 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.889 03:51:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 Null3 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.889 03:51:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 Null4 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:48.889 03:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.889 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 03:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.889 03:51:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 4420 00:07:49.148 00:07:49.148 Discovery Log Number of Records 6, Generation counter 6 00:07:49.148 =====Discovery Log Entry 0====== 00:07:49.148 trtype: tcp 00:07:49.148 adrfam: ipv4 00:07:49.148 subtype: current discovery subsystem 00:07:49.148 treq: not required 00:07:49.148 portid: 0 00:07:49.148 trsvcid: 4420 00:07:49.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:49.148 traddr: 10.0.0.2 00:07:49.148 eflags: explicit discovery connections, duplicate discovery information 00:07:49.148 sectype: none 00:07:49.148 =====Discovery Log Entry 1====== 00:07:49.148 trtype: tcp 00:07:49.148 adrfam: ipv4 00:07:49.148 subtype: nvme subsystem 00:07:49.148 treq: not required 00:07:49.148 portid: 0 00:07:49.148 trsvcid: 4420 00:07:49.148 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:49.148 traddr: 10.0.0.2 00:07:49.148 eflags: none 00:07:49.148 sectype: none 00:07:49.148 =====Discovery Log Entry 2====== 00:07:49.148 trtype: tcp 00:07:49.148 adrfam: ipv4 00:07:49.148 subtype: nvme subsystem 00:07:49.148 treq: not required 00:07:49.148 portid: 0 00:07:49.148 trsvcid: 
4420 00:07:49.148 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:49.148 traddr: 10.0.0.2 00:07:49.148 eflags: none 00:07:49.148 sectype: none 00:07:49.148 =====Discovery Log Entry 3====== 00:07:49.148 trtype: tcp 00:07:49.148 adrfam: ipv4 00:07:49.148 subtype: nvme subsystem 00:07:49.148 treq: not required 00:07:49.148 portid: 0 00:07:49.148 trsvcid: 4420 00:07:49.148 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:49.148 traddr: 10.0.0.2 00:07:49.148 eflags: none 00:07:49.148 sectype: none 00:07:49.148 =====Discovery Log Entry 4====== 00:07:49.148 trtype: tcp 00:07:49.148 adrfam: ipv4 00:07:49.148 subtype: nvme subsystem 00:07:49.148 treq: not required 00:07:49.148 portid: 0 00:07:49.148 trsvcid: 4420 00:07:49.149 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:49.149 traddr: 10.0.0.2 00:07:49.149 eflags: none 00:07:49.149 sectype: none 00:07:49.149 =====Discovery Log Entry 5====== 00:07:49.149 trtype: tcp 00:07:49.149 adrfam: ipv4 00:07:49.149 subtype: discovery subsystem referral 00:07:49.149 treq: not required 00:07:49.149 portid: 0 00:07:49.149 trsvcid: 4430 00:07:49.149 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:49.149 traddr: 10.0.0.2 00:07:49.149 eflags: none 00:07:49.149 sectype: none 00:07:49.149 Perform nvmf subsystem discovery via RPC 00:07:49.149 03:51:24 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:49.149 03:51:24 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 [2024-11-08 03:51:24.071696] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:49.149 [ 00:07:49.149 { 00:07:49.149 "allow_any_host": true, 00:07:49.149 "hosts": [], 00:07:49.149 "listen_addresses": [ 00:07:49.149 { 00:07:49.149 "adrfam": "IPv4", 00:07:49.149 "traddr": "10.0.0.2", 00:07:49.149 "transport": "TCP", 00:07:49.149 "trsvcid": "4420", 00:07:49.149 "trtype": "TCP" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:49.149 "subtype": "Discovery" 00:07:49.149 }, 00:07:49.149 { 00:07:49.149 "allow_any_host": true, 00:07:49.149 "hosts": [], 00:07:49.149 "listen_addresses": [ 00:07:49.149 { 00:07:49.149 "adrfam": "IPv4", 00:07:49.149 "traddr": "10.0.0.2", 00:07:49.149 "transport": "TCP", 00:07:49.149 "trsvcid": "4420", 00:07:49.149 "trtype": "TCP" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "max_cntlid": 65519, 00:07:49.149 "max_namespaces": 32, 00:07:49.149 "min_cntlid": 1, 00:07:49.149 "model_number": "SPDK bdev Controller", 00:07:49.149 "namespaces": [ 00:07:49.149 { 00:07:49.149 "bdev_name": "Null1", 00:07:49.149 "name": "Null1", 00:07:49.149 "nguid": "A45A0B3C864343969B8670222AC9AC3B", 00:07:49.149 "nsid": 1, 00:07:49.149 "uuid": "a45a0b3c-8643-4396-9b86-70222ac9ac3b" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.149 "serial_number": "SPDK00000000000001", 00:07:49.149 "subtype": "NVMe" 00:07:49.149 }, 00:07:49.149 { 00:07:49.149 "allow_any_host": true, 00:07:49.149 "hosts": [], 00:07:49.149 "listen_addresses": [ 00:07:49.149 { 00:07:49.149 "adrfam": "IPv4", 00:07:49.149 "traddr": "10.0.0.2", 00:07:49.149 "transport": "TCP", 00:07:49.149 "trsvcid": "4420", 00:07:49.149 "trtype": "TCP" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "max_cntlid": 65519, 00:07:49.149 "max_namespaces": 32, 00:07:49.149 "min_cntlid": 
1, 00:07:49.149 "model_number": "SPDK bdev Controller", 00:07:49.149 "namespaces": [ 00:07:49.149 { 00:07:49.149 "bdev_name": "Null2", 00:07:49.149 "name": "Null2", 00:07:49.149 "nguid": "824151BB1D6B46128ECB3E64E89B1BE5", 00:07:49.149 "nsid": 1, 00:07:49.149 "uuid": "824151bb-1d6b-4612-8ecb-3e64e89b1be5" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:49.149 "serial_number": "SPDK00000000000002", 00:07:49.149 "subtype": "NVMe" 00:07:49.149 }, 00:07:49.149 { 00:07:49.149 "allow_any_host": true, 00:07:49.149 "hosts": [], 00:07:49.149 "listen_addresses": [ 00:07:49.149 { 00:07:49.149 "adrfam": "IPv4", 00:07:49.149 "traddr": "10.0.0.2", 00:07:49.149 "transport": "TCP", 00:07:49.149 "trsvcid": "4420", 00:07:49.149 "trtype": "TCP" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "max_cntlid": 65519, 00:07:49.149 "max_namespaces": 32, 00:07:49.149 "min_cntlid": 1, 00:07:49.149 "model_number": "SPDK bdev Controller", 00:07:49.149 "namespaces": [ 00:07:49.149 { 00:07:49.149 "bdev_name": "Null3", 00:07:49.149 "name": "Null3", 00:07:49.149 "nguid": "4FE7C27C4CA5449B8F9DE921B207B03C", 00:07:49.149 "nsid": 1, 00:07:49.149 "uuid": "4fe7c27c-4ca5-449b-8f9d-e921b207b03c" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:49.149 "serial_number": "SPDK00000000000003", 00:07:49.149 "subtype": "NVMe" 00:07:49.149 }, 00:07:49.149 { 00:07:49.149 "allow_any_host": true, 00:07:49.149 "hosts": [], 00:07:49.149 "listen_addresses": [ 00:07:49.149 { 00:07:49.149 "adrfam": "IPv4", 00:07:49.149 "traddr": "10.0.0.2", 00:07:49.149 "transport": "TCP", 00:07:49.149 "trsvcid": "4420", 00:07:49.149 "trtype": "TCP" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "max_cntlid": 65519, 00:07:49.149 "max_namespaces": 32, 00:07:49.149 "min_cntlid": 1, 00:07:49.149 "model_number": "SPDK bdev Controller", 00:07:49.149 "namespaces": [ 00:07:49.149 { 00:07:49.149 "bdev_name": "Null4", 00:07:49.149 "name": "Null4", 00:07:49.149 "nguid": "DF7B4C8D8CCB4747B097E9466AB329F5", 00:07:49.149 "nsid": 1, 00:07:49.149 "uuid": "df7b4c8d-8ccb-4747-b097-e9466ab329f5" 00:07:49.149 } 00:07:49.149 ], 00:07:49.149 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:49.149 "serial_number": "SPDK00000000000004", 00:07:49.149 "subtype": "NVMe" 00:07:49.149 } 00:07:49.149 ] 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@42 -- # seq 1 4 00:07:49.149 03:51:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.149 03:51:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.149 03:51:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 
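The JSON above is the output of `nvmf_get_subsystems` after discovery.sh's setup loop (target/discovery.sh@26-30, traced before the discovery log) has run: four null bdevs, each wrapped in its own subsystem with a namespace and a TCP listener on 10.0.0.2:4420, plus the discovery listener and a referral. A sketch of that setup, assuming rpc.py talks to the app's default /var/tmp/spdk.sock; all values are the ones in the trace:

```bash
# Setup behind the subsystem list above; flags and values from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    "$rpc" bdev_null_create "Null$i" 102400 512              # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
           -a -s "SPDK0000000000000$i"                       # -a: allow any host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
           -t tcp -a 10.0.0.2 -s 4420
done
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
"$rpc" nvmf_get_subsystems | jq -r '.[].nqn'                 # discovery NQN + cnode1..4
```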
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.149 03:51:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.149 03:51:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:49.149 03:51:24 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:49.149 03:51:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.149 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.149 03:51:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.149 03:51:24 -- target/discovery.sh@49 -- # check_bdevs= 00:07:49.149 03:51:24 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:49.149 03:51:24 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:49.149 03:51:24 -- target/discovery.sh@57 -- # nvmftestfini 00:07:49.149 03:51:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:49.149 03:51:24 -- nvmf/common.sh@116 -- # sync 00:07:49.149 03:51:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:49.149 03:51:24 -- nvmf/common.sh@119 -- # set +e 00:07:49.149 03:51:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:49.149 03:51:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:49.408 rmmod nvme_tcp 00:07:49.408 rmmod nvme_fabrics 00:07:49.408 rmmod nvme_keyring 00:07:49.408 03:51:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:49.408 03:51:24 -- nvmf/common.sh@123 -- # set -e 00:07:49.408 03:51:24 -- nvmf/common.sh@124 -- # return 0 00:07:49.408 03:51:24 -- nvmf/common.sh@477 -- # '[' -n 61516 ']' 00:07:49.408 03:51:24 -- nvmf/common.sh@478 -- # killprocess 61516 00:07:49.408 03:51:24 -- common/autotest_common.sh@936 -- # '[' -z 61516 ']' 00:07:49.408 03:51:24 -- 
common/autotest_common.sh@940 -- # kill -0 61516 00:07:49.408 03:51:24 -- common/autotest_common.sh@941 -- # uname 00:07:49.408 03:51:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.408 03:51:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61516 00:07:49.408 killing process with pid 61516 00:07:49.408 03:51:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.408 03:51:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.408 03:51:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61516' 00:07:49.408 03:51:24 -- common/autotest_common.sh@955 -- # kill 61516 00:07:49.408 [2024-11-08 03:51:24.337816] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:49.408 03:51:24 -- common/autotest_common.sh@960 -- # wait 61516 00:07:49.667 03:51:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:49.667 03:51:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:49.667 03:51:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:49.667 03:51:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.667 03:51:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:49.667 03:51:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.667 03:51:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.667 03:51:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.667 03:51:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:49.667 ************************************ 00:07:49.667 END TEST nvmf_discovery 00:07:49.667 ************************************ 00:07:49.667 00:07:49.667 real 0m2.568s 00:07:49.667 user 0m6.952s 00:07:49.667 sys 0m0.642s 00:07:49.667 03:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.667 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.667 03:51:24 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:49.667 03:51:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.667 03:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.667 03:51:24 -- common/autotest_common.sh@10 -- # set +x 00:07:49.667 ************************************ 00:07:49.667 START TEST nvmf_referrals 00:07:49.667 ************************************ 00:07:49.667 03:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:49.667 * Looking for test storage... 
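The teardown traced just before the nvmf_referrals run begins above mirrors the setup in reverse: each subsystem is deleted before its backing bdev, the referral is dropped, the app is killed, and the initiator-side kernel modules are unloaded (the `rmmod nvme_tcp` / `rmmod nvme_fabrics` lines). As a sketch, assuming `$nvmfpid` was saved when the target was launched:

```bash
# Reverse-order teardown, as traced at the end of discovery.sh above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 1 2 3 4; do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # subsystem first...
    "$rpc" bdev_null_delete "Null$i"                             # ...then its bdev
done
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
kill "$nvmfpid"                          # $nvmfpid: assumption, saved at app start
wait "$nvmfpid" 2>/dev/null || true      # reap the backgrounded target
modprobe -v -r nvme-tcp nvme-fabrics     # unload transports last, as in the log
```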
00:07:49.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:49.667 03:51:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.667 03:51:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.667 03:51:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.927 03:51:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.927 03:51:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.927 03:51:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.927 03:51:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.927 03:51:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.927 03:51:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.927 03:51:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.927 03:51:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.927 03:51:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.927 03:51:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.927 03:51:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.927 03:51:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.927 03:51:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.927 03:51:24 -- scripts/common.sh@344 -- # : 1 00:07:49.927 03:51:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.927 03:51:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.927 03:51:24 -- scripts/common.sh@364 -- # decimal 1 00:07:49.927 03:51:24 -- scripts/common.sh@352 -- # local d=1 00:07:49.927 03:51:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.927 03:51:24 -- scripts/common.sh@354 -- # echo 1 00:07:49.927 03:51:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.927 03:51:24 -- scripts/common.sh@365 -- # decimal 2 00:07:49.927 03:51:24 -- scripts/common.sh@352 -- # local d=2 00:07:49.927 03:51:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.927 03:51:24 -- scripts/common.sh@354 -- # echo 2 00:07:49.927 03:51:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.927 03:51:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.927 03:51:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.927 03:51:24 -- scripts/common.sh@367 -- # return 0 00:07:49.927 03:51:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.927 03:51:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.927 --rc genhtml_branch_coverage=1 00:07:49.927 --rc genhtml_function_coverage=1 00:07:49.927 --rc genhtml_legend=1 00:07:49.927 --rc geninfo_all_blocks=1 00:07:49.927 --rc geninfo_unexecuted_blocks=1 00:07:49.927 00:07:49.927 ' 00:07:49.927 03:51:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.927 --rc genhtml_branch_coverage=1 00:07:49.927 --rc genhtml_function_coverage=1 00:07:49.927 --rc genhtml_legend=1 00:07:49.927 --rc geninfo_all_blocks=1 00:07:49.927 --rc geninfo_unexecuted_blocks=1 00:07:49.927 00:07:49.927 ' 00:07:49.927 03:51:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.927 --rc genhtml_branch_coverage=1 00:07:49.927 --rc genhtml_function_coverage=1 00:07:49.927 --rc genhtml_legend=1 00:07:49.927 --rc geninfo_all_blocks=1 00:07:49.927 --rc geninfo_unexecuted_blocks=1 00:07:49.927 00:07:49.927 ' 00:07:49.927 
03:51:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.927 --rc genhtml_branch_coverage=1 00:07:49.927 --rc genhtml_function_coverage=1 00:07:49.927 --rc genhtml_legend=1 00:07:49.927 --rc geninfo_all_blocks=1 00:07:49.927 --rc geninfo_unexecuted_blocks=1 00:07:49.927 00:07:49.927 ' 00:07:49.927 03:51:24 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.927 03:51:24 -- nvmf/common.sh@7 -- # uname -s 00:07:49.927 03:51:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.927 03:51:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.927 03:51:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.927 03:51:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.927 03:51:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.927 03:51:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.927 03:51:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.927 03:51:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.927 03:51:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.927 03:51:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:49.927 03:51:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:49.927 03:51:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.927 03:51:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.927 03:51:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.927 03:51:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.927 03:51:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.927 03:51:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.927 03:51:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.927 03:51:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.927 03:51:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.927 03:51:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.927 03:51:24 -- paths/export.sh@5 -- # export PATH 00:07:49.927 03:51:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.927 03:51:24 -- nvmf/common.sh@46 -- # : 0 00:07:49.927 03:51:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.927 03:51:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.927 03:51:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.927 03:51:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.927 03:51:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.927 03:51:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.927 03:51:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.927 03:51:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.927 03:51:24 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:49.927 03:51:24 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:49.927 03:51:24 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:49.927 03:51:24 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:49.927 03:51:24 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:49.927 03:51:24 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:49.927 03:51:24 -- target/referrals.sh@37 -- # nvmftestinit 00:07:49.927 03:51:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:49.927 03:51:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.927 03:51:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:49.927 03:51:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:49.927 03:51:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:49.927 03:51:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.927 03:51:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.927 03:51:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.927 03:51:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:49.927 03:51:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:49.927 03:51:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.927 03:51:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
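referrals.sh pulls in the same common.sh, so the identity block above repeats: `nvme gen-hostnqn` mints a UUID-based host NQN once, the UUID doubles as the host ID, and both land in the NVME_HOST array that later expands into every `nvme discover` call (visible further down as `--hostnqn=... --hostid=...`). A sketch of the pattern; extracting the UUID with a parameter expansion is illustrative, not necessarily how common.sh does it:

```bash
# Host identity as seen in the trace: NQN from nvme-cli, UUID reused as host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # reuse the trailing UUID as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Later the array splats into discovery requests against the target:
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json
```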
00:07:49.928 03:51:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:49.928 03:51:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:49.928 03:51:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:49.928 03:51:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:49.928 03:51:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:49.928 03:51:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.928 03:51:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:49.928 03:51:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:49.928 03:51:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:49.928 03:51:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:49.928 03:51:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:49.928 03:51:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:49.928 Cannot find device "nvmf_tgt_br" 00:07:49.928 03:51:24 -- nvmf/common.sh@154 -- # true 00:07:49.928 03:51:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.928 Cannot find device "nvmf_tgt_br2" 00:07:49.928 03:51:24 -- nvmf/common.sh@155 -- # true 00:07:49.928 03:51:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:49.928 03:51:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:49.928 Cannot find device "nvmf_tgt_br" 00:07:49.928 03:51:24 -- nvmf/common.sh@157 -- # true 00:07:49.928 03:51:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:49.928 Cannot find device "nvmf_tgt_br2" 00:07:49.928 03:51:24 -- nvmf/common.sh@158 -- # true 00:07:49.928 03:51:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:49.928 03:51:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:49.928 03:51:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:49.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.928 03:51:25 -- nvmf/common.sh@161 -- # true 00:07:49.928 03:51:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:49.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.928 03:51:25 -- nvmf/common.sh@162 -- # true 00:07:49.928 03:51:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:49.928 03:51:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:49.928 03:51:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:49.928 03:51:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:49.928 03:51:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.186 03:51:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.186 03:51:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.186 03:51:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.186 03:51:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.186 03:51:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:50.186 03:51:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:50.186 03:51:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:07:50.186 03:51:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:50.186 03:51:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.186 03:51:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.186 03:51:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.186 03:51:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:50.186 03:51:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:50.187 03:51:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.187 03:51:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.187 03:51:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.187 03:51:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.187 03:51:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.187 03:51:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:50.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:50.187 00:07:50.187 --- 10.0.0.2 ping statistics --- 00:07:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.187 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:50.187 03:51:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:50.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:50.187 00:07:50.187 --- 10.0.0.3 ping statistics --- 00:07:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.187 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:50.187 03:51:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:50.187 00:07:50.187 --- 10.0.0.1 ping statistics --- 00:07:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.187 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:50.187 03:51:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.187 03:51:25 -- nvmf/common.sh@421 -- # return 0 00:07:50.187 03:51:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:50.187 03:51:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.187 03:51:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:50.187 03:51:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:50.187 03:51:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.187 03:51:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:50.187 03:51:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:50.187 03:51:25 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:50.187 03:51:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:50.187 03:51:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.187 03:51:25 -- common/autotest_common.sh@10 -- # set +x 00:07:50.187 03:51:25 -- nvmf/common.sh@469 -- # nvmfpid=61750 00:07:50.187 03:51:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.187 03:51:25 -- nvmf/common.sh@470 -- # waitforlisten 61750 00:07:50.187 03:51:25 -- common/autotest_common.sh@829 -- # '[' -z 61750 ']' 00:07:50.187 03:51:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.187 03:51:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.187 03:51:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.187 03:51:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.187 03:51:25 -- common/autotest_common.sh@10 -- # set +x 00:07:50.187 [2024-11-08 03:51:25.261369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.187 [2024-11-08 03:51:25.261984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.446 [2024-11-08 03:51:25.405090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.446 [2024-11-08 03:51:25.486194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.446 [2024-11-08 03:51:25.486356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.446 [2024-11-08 03:51:25.486369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.446 [2024-11-08 03:51:25.486377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
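Because the target interfaces live inside nvmf_tgt_ns_spdk, the last step of nvmftestinit (nvmf/common.sh@208 above) prepends `ip netns exec` to the app's argv, so nvmf_tgt itself starts inside the namespace while rpc.py keeps reaching it over the pathname UNIX socket, which network namespaces do not isolate. A sketch of that array composition, with the binary path and flags taken from the log:

```bash
# Array composition from nvmf/common.sh@208 above: the target's argv is
# prefixed with "ip netns exec" rather than wrapped in a subshell, so the
# word splitting stays exact.
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

"${NVMF_APP[@]}" -m 0xF &               # the invocation seen in the log
nvmfpid=$!                              # saved for waitforlisten and teardown
```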
00:07:50.446 [2024-11-08 03:51:25.486516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.446 [2024-11-08 03:51:25.486989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.446 [2024-11-08 03:51:25.487534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.446 [2024-11-08 03:51:25.487542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.381 03:51:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.381 03:51:26 -- common/autotest_common.sh@862 -- # return 0 00:07:51.381 03:51:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:51.381 03:51:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.381 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.382 03:51:26 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 [2024-11-08 03:51:26.238220] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 [2024-11-08 03:51:26.257585] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.382 03:51:26 -- target/referrals.sh@48 -- # jq length 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:51.382 03:51:26 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:51.382 03:51:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:51.382 03:51:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.382 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 
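The `(( i == 0 ))` / `return 0` pair above is the tail of waitforlisten: nvmfappstart backgrounds the target, then retries an RPC against /var/tmp/spdk.sock until it answers or the retry budget (`max_retries=100` in the trace) runs out. A hedged sketch of that polling loop; the real helper in autotest_common.sh keeps more state, and `waitforlisten_sketch` is an illustrative name:

```bash
# Polling sketch of waitforlisten: retry an RPC until the socket answers.
waitforlisten_sketch() {                   # illustrative name, not the real helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 100; i > 0; i-- )); do
        kill -0 "$pid" 2>/dev/null || return 1     # app died before listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
               rpc_get_methods &> /dev/null; then
            return 0                               # socket is up; RPCs work now
        fi
        sleep 0.1
    done
    return 1                                       # retry budget exhausted
}
```

Only once this returns does the script install the cleanup trap and issue `nvmf_create_transport`, which is why the transport-init notice appears after the reactor startup lines above.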
00:07:51.382 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 03:51:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:51.382 03:51:26 -- target/referrals.sh@21 -- # sort 00:07:51.382 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:51.382 03:51:26 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:51.382 03:51:26 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:51.382 03:51:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.382 03:51:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.382 03:51:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.382 03:51:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.382 03:51:26 -- target/referrals.sh@26 -- # sort 00:07:51.642 03:51:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:51.642 03:51:26 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.642 03:51:26 -- target/referrals.sh@56 -- # jq length 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:51.642 03:51:26 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:51.642 03:51:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.642 03:51:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.642 03:51:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.642 03:51:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.642 03:51:26 -- target/referrals.sh@26 -- # sort 00:07:51.642 03:51:26 -- target/referrals.sh@26 -- # echo 00:07:51.642 03:51:26 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:51.642 03:51:26 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.642 03:51:26 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:51.642 03:51:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:51.642 03:51:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.642 03:51:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.642 03:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 03:51:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:51.642 03:51:26 -- target/referrals.sh@21 -- # sort 00:07:51.903 03:51:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.903 03:51:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:51.903 03:51:26 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:51.903 03:51:26 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:51.903 03:51:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.903 03:51:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.903 03:51:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.903 03:51:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.903 03:51:26 -- target/referrals.sh@26 -- # sort 00:07:51.903 03:51:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:51.903 03:51:26 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:51.903 03:51:26 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:51.903 03:51:26 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:51.903 03:51:26 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:51.903 03:51:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.903 03:51:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:52.162 03:51:27 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:52.162 03:51:27 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:52.162 03:51:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:52.162 03:51:27 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:52.162 03:51:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:52.162 03:51:27 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.162 03:51:27 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:52.162 03:51:27 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:52.162 03:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.162 03:51:27 -- common/autotest_common.sh@10 -- # set +x 00:07:52.162 03:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.162 03:51:27 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:52.162 03:51:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.162 03:51:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.162 03:51:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.162 03:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.162 03:51:27 -- common/autotest_common.sh@10 -- # set +x 00:07:52.162 03:51:27 -- target/referrals.sh@21 -- # sort 00:07:52.162 03:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.162 03:51:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:52.162 03:51:27 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:52.162 03:51:27 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:52.162 03:51:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.162 03:51:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.162 03:51:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.162 03:51:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.162 03:51:27 -- target/referrals.sh@26 -- # sort 00:07:52.421 03:51:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:52.421 03:51:27 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:52.421 03:51:27 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:52.421 03:51:27 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:52.421 03:51:27 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:52.421 03:51:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:52.421 03:51:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.421 03:51:27 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:52.421 03:51:27 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:52.421 03:51:27 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:52.421 03:51:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:52.421 03:51:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.421 03:51:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:52.680 03:51:27 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:52.680 03:51:27 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:52.680 03:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.680 03:51:27 -- common/autotest_common.sh@10 -- # set +x 00:07:52.680 03:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.680 03:51:27 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.680 03:51:27 -- target/referrals.sh@82 -- # jq length 00:07:52.680 03:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.680 03:51:27 -- common/autotest_common.sh@10 -- # set +x 00:07:52.680 03:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.680 03:51:27 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:52.680 03:51:27 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:52.680 03:51:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.680 03:51:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.680 03:51:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.680 03:51:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.680 03:51:27 -- target/referrals.sh@26 -- # sort 00:07:52.680 03:51:27 -- target/referrals.sh@26 -- # echo 00:07:52.680 03:51:27 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:52.680 03:51:27 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:52.680 03:51:27 -- target/referrals.sh@86 -- # nvmftestfini 00:07:52.680 03:51:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:52.680 03:51:27 -- nvmf/common.sh@116 -- # sync 00:07:52.939 03:51:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:52.939 03:51:27 -- nvmf/common.sh@119 -- # set +e 00:07:52.939 03:51:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:52.939 03:51:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:52.939 rmmod nvme_tcp 00:07:52.939 rmmod nvme_fabrics 00:07:52.939 rmmod nvme_keyring 00:07:52.939 03:51:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:52.939 03:51:27 -- nvmf/common.sh@123 -- # set -e 00:07:52.939 03:51:27 -- nvmf/common.sh@124 -- # return 0 00:07:52.939 03:51:27 -- nvmf/common.sh@477 -- # '[' -n 61750 ']' 00:07:52.939 03:51:27 -- nvmf/common.sh@478 -- # killprocess 61750 00:07:52.939 03:51:27 -- common/autotest_common.sh@936 -- # '[' -z 61750 ']' 00:07:52.939 03:51:27 -- common/autotest_common.sh@940 -- # kill -0 61750 00:07:52.939 03:51:27 -- common/autotest_common.sh@941 -- # uname 00:07:52.939 03:51:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:52.939 03:51:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61750 00:07:52.939 03:51:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:52.939 03:51:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:52.939 killing process with pid 61750 00:07:52.939 03:51:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61750' 00:07:52.939 03:51:27 -- common/autotest_common.sh@955 -- # kill 61750 00:07:52.939 03:51:27 -- common/autotest_common.sh@960 -- # wait 61750 00:07:53.197 03:51:28 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:07:53.197 03:51:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:53.197 03:51:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:53.197 03:51:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.197 03:51:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:53.197 03:51:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.197 03:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.197 03:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.197 03:51:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:53.197 ************************************ 00:07:53.197 END TEST nvmf_referrals 00:07:53.197 ************************************ 00:07:53.197 00:07:53.197 real 0m3.532s 00:07:53.197 user 0m11.534s 00:07:53.197 sys 0m0.918s 00:07:53.197 03:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.197 03:51:28 -- common/autotest_common.sh@10 -- # set +x 00:07:53.197 03:51:28 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:53.197 03:51:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.197 03:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.197 03:51:28 -- common/autotest_common.sh@10 -- # set +x 00:07:53.197 ************************************ 00:07:53.197 START TEST nvmf_connect_disconnect 00:07:53.198 ************************************ 00:07:53.198 03:51:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:53.457 * Looking for test storage... 00:07:53.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.457 03:51:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.457 03:51:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.457 03:51:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.457 03:51:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.457 03:51:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.457 03:51:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.457 03:51:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.457 03:51:28 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.457 03:51:28 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.457 03:51:28 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.457 03:51:28 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.457 03:51:28 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.457 03:51:28 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.457 03:51:28 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.457 03:51:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.457 03:51:28 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.457 03:51:28 -- scripts/common.sh@344 -- # : 1 00:07:53.457 03:51:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.457 03:51:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.457 03:51:28 -- scripts/common.sh@364 -- # decimal 1 00:07:53.457 03:51:28 -- scripts/common.sh@352 -- # local d=1 00:07:53.457 03:51:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.457 03:51:28 -- scripts/common.sh@354 -- # echo 1 00:07:53.457 03:51:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.457 03:51:28 -- scripts/common.sh@365 -- # decimal 2 00:07:53.457 03:51:28 -- scripts/common.sh@352 -- # local d=2 00:07:53.457 03:51:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.457 03:51:28 -- scripts/common.sh@354 -- # echo 2 00:07:53.457 03:51:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.457 03:51:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.457 03:51:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.457 03:51:28 -- scripts/common.sh@367 -- # return 0 00:07:53.457 03:51:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.457 03:51:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.457 --rc genhtml_branch_coverage=1 00:07:53.457 --rc genhtml_function_coverage=1 00:07:53.457 --rc genhtml_legend=1 00:07:53.457 --rc geninfo_all_blocks=1 00:07:53.457 --rc geninfo_unexecuted_blocks=1 00:07:53.457 00:07:53.457 ' 00:07:53.457 03:51:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.457 --rc genhtml_branch_coverage=1 00:07:53.457 --rc genhtml_function_coverage=1 00:07:53.457 --rc genhtml_legend=1 00:07:53.457 --rc geninfo_all_blocks=1 00:07:53.457 --rc geninfo_unexecuted_blocks=1 00:07:53.457 00:07:53.457 ' 00:07:53.457 03:51:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.457 --rc genhtml_branch_coverage=1 00:07:53.457 --rc genhtml_function_coverage=1 00:07:53.457 --rc genhtml_legend=1 00:07:53.457 --rc geninfo_all_blocks=1 00:07:53.457 --rc geninfo_unexecuted_blocks=1 00:07:53.457 00:07:53.457 ' 00:07:53.457 03:51:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.457 --rc genhtml_branch_coverage=1 00:07:53.457 --rc genhtml_function_coverage=1 00:07:53.457 --rc genhtml_legend=1 00:07:53.457 --rc geninfo_all_blocks=1 00:07:53.457 --rc geninfo_unexecuted_blocks=1 00:07:53.457 00:07:53.457 ' 00:07:53.457 03:51:28 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.457 03:51:28 -- nvmf/common.sh@7 -- # uname -s 00:07:53.457 03:51:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.457 03:51:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.457 03:51:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.457 03:51:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.457 03:51:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.457 03:51:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.457 03:51:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.457 03:51:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.457 03:51:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.457 03:51:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
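In the common.sh sourcing above, a fresh host NQN is produced by nvme gen-hostnqn; the entries that follow show the matching NVME_HOSTID (bcb05152-...) and the NVME_HOST argument array built from both. A minimal sketch of that wiring — deriving the host ID by stripping the uuid: prefix is an assumption about how the two values relate, not a quote of common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: the bare <uuid> for --hostid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json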
00:07:53.457 03:51:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:07:53.457 03:51:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.457 03:51:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.457 03:51:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.457 03:51:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.457 03:51:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.457 03:51:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.457 03:51:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.457 03:51:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.457 03:51:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.457 03:51:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.457 03:51:28 -- paths/export.sh@5 -- # export PATH 00:07:53.457 03:51:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.457 03:51:28 -- nvmf/common.sh@46 -- # : 0 00:07:53.457 03:51:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.457 03:51:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.457 03:51:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.457 03:51:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.457 03:51:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.457 03:51:28 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:53.457 03:51:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.457 03:51:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.457 03:51:28 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.457 03:51:28 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.457 03:51:28 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:53.457 03:51:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:53.457 03:51:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.457 03:51:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:53.457 03:51:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:53.457 03:51:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:53.457 03:51:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.457 03:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.457 03:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.457 03:51:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:53.457 03:51:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:53.457 03:51:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.457 03:51:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.457 03:51:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.457 03:51:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:53.458 03:51:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.458 03:51:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.458 03:51:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.458 03:51:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.458 03:51:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.458 03:51:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.458 03:51:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.458 03:51:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.458 03:51:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:53.458 03:51:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:53.458 Cannot find device "nvmf_tgt_br" 00:07:53.458 03:51:28 -- nvmf/common.sh@154 -- # true 00:07:53.458 03:51:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.458 Cannot find device "nvmf_tgt_br2" 00:07:53.458 03:51:28 -- nvmf/common.sh@155 -- # true 00:07:53.458 03:51:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:53.458 03:51:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:53.458 Cannot find device "nvmf_tgt_br" 00:07:53.458 03:51:28 -- nvmf/common.sh@157 -- # true 00:07:53.458 03:51:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:53.458 Cannot find device "nvmf_tgt_br2" 00:07:53.458 03:51:28 -- nvmf/common.sh@158 -- # true 00:07:53.458 03:51:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:53.724 03:51:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:53.724 03:51:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:07:53.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.724 03:51:28 -- nvmf/common.sh@161 -- # true 00:07:53.724 03:51:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.724 03:51:28 -- nvmf/common.sh@162 -- # true 00:07:53.724 03:51:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.724 03:51:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.724 03:51:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.724 03:51:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.724 03:51:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.724 03:51:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.724 03:51:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.724 03:51:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.724 03:51:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.724 03:51:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:53.724 03:51:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:53.724 03:51:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:53.724 03:51:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:53.724 03:51:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.724 03:51:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.724 03:51:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.724 03:51:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:53.724 03:51:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:53.724 03:51:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.724 03:51:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.724 03:51:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.724 03:51:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.724 03:51:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.724 03:51:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:53.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:07:53.724 00:07:53.724 --- 10.0.0.2 ping statistics --- 00:07:53.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.724 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:53.724 03:51:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:53.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:53.724 00:07:53.724 --- 10.0.0.3 ping statistics --- 00:07:53.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.724 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:53.724 03:51:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:53.724 00:07:53.724 --- 10.0.0.1 ping statistics --- 00:07:53.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.724 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:53.724 03:51:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.724 03:51:28 -- nvmf/common.sh@421 -- # return 0 00:07:53.724 03:51:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:53.724 03:51:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.724 03:51:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:53.724 03:51:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:53.724 03:51:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.724 03:51:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:53.724 03:51:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:53.724 03:51:28 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:53.724 03:51:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:53.724 03:51:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.724 03:51:28 -- common/autotest_common.sh@10 -- # set +x 00:07:53.724 03:51:28 -- nvmf/common.sh@469 -- # nvmfpid=62066 00:07:53.724 03:51:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.724 03:51:28 -- nvmf/common.sh@470 -- # waitforlisten 62066 00:07:53.724 03:51:28 -- common/autotest_common.sh@829 -- # '[' -z 62066 ']' 00:07:53.984 03:51:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.984 03:51:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.984 03:51:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.984 03:51:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.984 03:51:28 -- common/autotest_common.sh@10 -- # set +x 00:07:53.984 [2024-11-08 03:51:28.873656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:53.984 [2024-11-08 03:51:28.873730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.984 [2024-11-08 03:51:29.004681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.242 [2024-11-08 03:51:29.106745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.242 [2024-11-08 03:51:29.106943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.242 [2024-11-08 03:51:29.106955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.242 [2024-11-08 03:51:29.106963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
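Before each target test, nvmf_veth_init (traced above) first tears down any leftovers — the "Cannot find device" / "Cannot open network namespace" messages are the expected result of deleting interfaces that do not exist yet — then rebuilds the namespace/veth/bridge topology, verifies it with the three pings, loads nvme-tcp, and starts a fresh nvmf_tgt (pid 62066 here). A reduced sketch of that topology using the names from the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the iptables ACCEPT rules are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.2    # host -> target namespace, as in the log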
00:07:54.242 [2024-11-08 03:51:29.107111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.242 [2024-11-08 03:51:29.107488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.242 [2024-11-08 03:51:29.108149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.242 [2024-11-08 03:51:29.108198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.807 03:51:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.807 03:51:29 -- common/autotest_common.sh@862 -- # return 0 00:07:54.808 03:51:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:54.808 03:51:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.808 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:54.808 03:51:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.808 03:51:29 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:54.808 03:51:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.808 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:54.808 [2024-11-08 03:51:29.879902] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.808 03:51:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.808 03:51:29 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:54.808 03:51:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.808 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.065 03:51:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.065 03:51:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.065 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.065 03:51:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.065 03:51:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.065 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.065 03:51:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.065 03:51:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.065 03:51:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.065 [2024-11-08 03:51:29.944030] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.065 03:51:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:55.065 03:51:29 -- target/connect_disconnect.sh@34 -- # set +x 00:07:57.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:06.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.989 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:56.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.969 03:55:15 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
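The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the body of the connect_disconnect test: num_iterations=100 with NVME_CONNECT='nvme connect -i 8', i.e. one hundred connect/disconnect cycles against the listener on 10.0.0.2:4420 (the summary below reports real 3m48.232s). A sketch of the loop shape, assuming the address and subsystem NQN from the log; the real script's readiness checks are more careful than this:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=$(nvme gen-hostnqn)
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
      # wait for the controller to surface in sysfs before tearing it down
      until grep -qs "$SUBNQN" /sys/class/nvme/*/subsysnqn; do sleep 0.1; done
      nvme disconnect -n "$SUBNQN"    # prints "NQN:... disconnected 1 controller(s)"
  done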
00:11:40.969 03:55:15 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:40.969 03:55:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:40.969 03:55:15 -- nvmf/common.sh@116 -- # sync 00:11:40.969 03:55:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:40.969 03:55:15 -- nvmf/common.sh@119 -- # set +e 00:11:40.969 03:55:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:40.969 03:55:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:40.969 rmmod nvme_tcp 00:11:40.969 rmmod nvme_fabrics 00:11:40.969 rmmod nvme_keyring 00:11:40.969 03:55:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:40.969 03:55:16 -- nvmf/common.sh@123 -- # set -e 00:11:40.969 03:55:16 -- nvmf/common.sh@124 -- # return 0 00:11:40.969 03:55:16 -- nvmf/common.sh@477 -- # '[' -n 62066 ']' 00:11:40.969 03:55:16 -- nvmf/common.sh@478 -- # killprocess 62066 00:11:40.969 03:55:16 -- common/autotest_common.sh@936 -- # '[' -z 62066 ']' 00:11:40.969 03:55:16 -- common/autotest_common.sh@940 -- # kill -0 62066 00:11:40.969 03:55:16 -- common/autotest_common.sh@941 -- # uname 00:11:40.969 03:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.969 03:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62066 00:11:41.228 03:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:41.228 03:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:41.228 killing process with pid 62066 00:11:41.228 03:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62066' 00:11:41.228 03:55:16 -- common/autotest_common.sh@955 -- # kill 62066 00:11:41.228 03:55:16 -- common/autotest_common.sh@960 -- # wait 62066 00:11:41.549 03:55:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:41.549 03:55:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:41.549 03:55:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:41.549 03:55:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.549 03:55:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:41.549 03:55:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.549 03:55:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.549 03:55:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.549 03:55:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:41.549 00:11:41.549 real 3m48.232s 00:11:41.549 user 14m53.421s 00:11:41.549 sys 0m18.253s 00:11:41.549 03:55:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.549 ************************************ 00:11:41.549 END TEST nvmf_connect_disconnect 00:11:41.549 ************************************ 00:11:41.549 03:55:16 -- common/autotest_common.sh@10 -- # set +x 00:11:41.549 03:55:16 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:41.549 03:55:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.549 03:55:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.549 03:55:16 -- common/autotest_common.sh@10 -- # set +x 00:11:41.549 ************************************ 00:11:41.549 START TEST nvmf_multitarget 00:11:41.549 ************************************ 00:11:41.549 03:55:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:41.549 * Looking for test storage... 
00:11:41.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:41.549 03:55:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:41.549 03:55:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:41.549 03:55:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:41.809 03:55:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:41.809 03:55:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:41.809 03:55:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:41.809 03:55:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:41.809 03:55:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:41.809 03:55:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:41.809 03:55:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.809 03:55:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:41.809 03:55:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:41.809 03:55:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:41.809 03:55:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:41.809 03:55:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:41.809 03:55:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:41.809 03:55:16 -- scripts/common.sh@344 -- # : 1 00:11:41.809 03:55:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:41.809 03:55:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.809 03:55:16 -- scripts/common.sh@364 -- # decimal 1 00:11:41.809 03:55:16 -- scripts/common.sh@352 -- # local d=1 00:11:41.809 03:55:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.809 03:55:16 -- scripts/common.sh@354 -- # echo 1 00:11:41.809 03:55:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:41.809 03:55:16 -- scripts/common.sh@365 -- # decimal 2 00:11:41.809 03:55:16 -- scripts/common.sh@352 -- # local d=2 00:11:41.809 03:55:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.809 03:55:16 -- scripts/common.sh@354 -- # echo 2 00:11:41.809 03:55:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:41.809 03:55:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.809 03:55:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:41.809 03:55:16 -- scripts/common.sh@367 -- # return 0 00:11:41.809 03:55:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.809 03:55:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.809 --rc genhtml_branch_coverage=1 00:11:41.809 --rc genhtml_function_coverage=1 00:11:41.809 --rc genhtml_legend=1 00:11:41.809 --rc geninfo_all_blocks=1 00:11:41.809 --rc geninfo_unexecuted_blocks=1 00:11:41.809 00:11:41.809 ' 00:11:41.809 03:55:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.809 --rc genhtml_branch_coverage=1 00:11:41.809 --rc genhtml_function_coverage=1 00:11:41.809 --rc genhtml_legend=1 00:11:41.809 --rc geninfo_all_blocks=1 00:11:41.809 --rc geninfo_unexecuted_blocks=1 00:11:41.809 00:11:41.809 ' 00:11:41.809 03:55:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.809 --rc genhtml_branch_coverage=1 00:11:41.809 --rc genhtml_function_coverage=1 00:11:41.809 --rc genhtml_legend=1 00:11:41.809 --rc geninfo_all_blocks=1 00:11:41.809 --rc geninfo_unexecuted_blocks=1 00:11:41.809 00:11:41.809 ' 00:11:41.809 
03:55:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.809 --rc genhtml_branch_coverage=1 00:11:41.809 --rc genhtml_function_coverage=1 00:11:41.809 --rc genhtml_legend=1 00:11:41.809 --rc geninfo_all_blocks=1 00:11:41.809 --rc geninfo_unexecuted_blocks=1 00:11:41.809 00:11:41.809 ' 00:11:41.809 03:55:16 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:41.809 03:55:16 -- nvmf/common.sh@7 -- # uname -s 00:11:41.809 03:55:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.809 03:55:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.809 03:55:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.809 03:55:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.809 03:55:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.809 03:55:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.809 03:55:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.809 03:55:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.809 03:55:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.809 03:55:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:41.809 03:55:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:41.809 03:55:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.809 03:55:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.809 03:55:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:41.809 03:55:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.809 03:55:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.809 03:55:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.809 03:55:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.809 03:55:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.809 03:55:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.809 03:55:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.809 03:55:16 -- paths/export.sh@5 -- # export PATH 00:11:41.809 03:55:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.809 03:55:16 -- nvmf/common.sh@46 -- # : 0 00:11:41.809 03:55:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:41.809 03:55:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:41.809 03:55:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:41.809 03:55:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.809 03:55:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.809 03:55:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:41.809 03:55:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:41.809 03:55:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:41.809 03:55:16 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:41.809 03:55:16 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:41.809 03:55:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:41.809 03:55:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.809 03:55:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:41.809 03:55:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:41.809 03:55:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:41.809 03:55:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.809 03:55:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.809 03:55:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.809 03:55:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:41.809 03:55:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:41.809 03:55:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.809 03:55:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.809 03:55:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:41.809 03:55:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:41.809 03:55:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:41.809 03:55:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:41.809 03:55:16 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:41.809 03:55:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.809 03:55:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:41.809 03:55:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:41.809 03:55:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:41.809 03:55:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:41.810 03:55:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:41.810 03:55:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:41.810 Cannot find device "nvmf_tgt_br" 00:11:41.810 03:55:16 -- nvmf/common.sh@154 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:41.810 Cannot find device "nvmf_tgt_br2" 00:11:41.810 03:55:16 -- nvmf/common.sh@155 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:41.810 03:55:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:41.810 Cannot find device "nvmf_tgt_br" 00:11:41.810 03:55:16 -- nvmf/common.sh@157 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:41.810 Cannot find device "nvmf_tgt_br2" 00:11:41.810 03:55:16 -- nvmf/common.sh@158 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:41.810 03:55:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:41.810 03:55:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.810 03:55:16 -- nvmf/common.sh@161 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.810 03:55:16 -- nvmf/common.sh@162 -- # true 00:11:41.810 03:55:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:41.810 03:55:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:41.810 03:55:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:41.810 03:55:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:41.810 03:55:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.069 03:55:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.069 03:55:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.069 03:55:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.069 03:55:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.069 03:55:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:42.069 03:55:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:42.069 03:55:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:42.069 03:55:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:42.069 03:55:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.069 03:55:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.069 03:55:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:42.069 03:55:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:42.069 03:55:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:42.069 03:55:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.069 03:55:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.069 03:55:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.069 03:55:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.069 03:55:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.069 03:55:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:42.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:42.069 00:11:42.069 --- 10.0.0.2 ping statistics --- 00:11:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.069 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:42.069 03:55:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:42.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:42.069 00:11:42.069 --- 10.0.0.3 ping statistics --- 00:11:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.069 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:42.069 03:55:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:42.069 00:11:42.069 --- 10.0.0.1 ping statistics --- 00:11:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.069 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:42.069 03:55:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.069 03:55:17 -- nvmf/common.sh@421 -- # return 0 00:11:42.069 03:55:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.069 03:55:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.069 03:55:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.069 03:55:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.069 03:55:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.069 03:55:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.069 03:55:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.069 03:55:17 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:42.069 03:55:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.069 03:55:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.069 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:11:42.069 03:55:17 -- nvmf/common.sh@469 -- # nvmfpid=65866 00:11:42.069 03:55:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.069 03:55:17 -- nvmf/common.sh@470 -- # waitforlisten 65866 00:11:42.069 03:55:17 -- common/autotest_common.sh@829 -- # '[' -z 65866 ']' 00:11:42.069 03:55:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.069 03:55:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
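The trace above is nvmf_veth_init assembling the test topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator veth (10.0.0.1) left on the host, a bridge (nvmf_br) joining the peer ends, and an iptables rule opening TCP port 4420, verified by the three pings. A minimal sketch of the same topology, reduced to a single target interface (names and addresses match the log; this is a reconstruction, not the common.sh source):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge-side peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge-side peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target reachability check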
00:11:42.069 03:55:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.069 03:55:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.069 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:11:42.069 [2024-11-08 03:55:17.173533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:42.069 [2024-11-08 03:55:17.174198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.329 [2024-11-08 03:55:17.319069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.588 [2024-11-08 03:55:17.450070] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.588 [2024-11-08 03:55:17.450259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.588 [2024-11-08 03:55:17.450276] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.588 [2024-11-08 03:55:17.450296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.588 [2024-11-08 03:55:17.450849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.588 [2024-11-08 03:55:17.451043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.588 [2024-11-08 03:55:17.451702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.588 [2024-11-08 03:55:17.451790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.156 03:55:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.156 03:55:18 -- common/autotest_common.sh@862 -- # return 0 00:11:43.156 03:55:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.156 03:55:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.156 03:55:18 -- common/autotest_common.sh@10 -- # set +x 00:11:43.156 03:55:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.156 03:55:18 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.156 03:55:18 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.156 03:55:18 -- target/multitarget.sh@21 -- # jq length 00:11:43.415 03:55:18 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:43.415 03:55:18 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:43.674 "nvmf_tgt_1" 00:11:43.674 03:55:18 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:43.674 "nvmf_tgt_2" 00:11:43.674 03:55:18 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.674 03:55:18 -- target/multitarget.sh@28 -- # jq length 00:11:43.933 03:55:18 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:43.933 03:55:18 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:43.933 true 00:11:43.933 03:55:18 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:44.192 true 00:11:44.192 03:55:19 -- target/multitarget.sh@35 -- # jq length 00:11:44.192 03:55:19 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:44.192 03:55:19 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:44.192 03:55:19 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:44.192 03:55:19 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:44.192 03:55:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:44.192 03:55:19 -- nvmf/common.sh@116 -- # sync 00:11:44.665 03:55:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:44.665 03:55:19 -- nvmf/common.sh@119 -- # set +e 00:11:44.665 03:55:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:44.665 03:55:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:44.665 rmmod nvme_tcp 00:11:44.665 rmmod nvme_fabrics 00:11:44.665 rmmod nvme_keyring 00:11:44.665 03:55:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:44.665 03:55:19 -- nvmf/common.sh@123 -- # set -e 00:11:44.665 03:55:19 -- nvmf/common.sh@124 -- # return 0 00:11:44.665 03:55:19 -- nvmf/common.sh@477 -- # '[' -n 65866 ']' 00:11:44.665 03:55:19 -- nvmf/common.sh@478 -- # killprocess 65866 00:11:44.665 03:55:19 -- common/autotest_common.sh@936 -- # '[' -z 65866 ']' 00:11:44.665 03:55:19 -- common/autotest_common.sh@940 -- # kill -0 65866 00:11:44.665 03:55:19 -- common/autotest_common.sh@941 -- # uname 00:11:44.665 03:55:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.665 03:55:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65866 00:11:44.665 03:55:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.665 03:55:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.665 killing process with pid 65866 00:11:44.665 03:55:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65866' 00:11:44.665 03:55:19 -- common/autotest_common.sh@955 -- # kill 65866 00:11:44.665 03:55:19 -- common/autotest_common.sh@960 -- # wait 65866 00:11:44.665 03:55:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:44.665 03:55:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:44.665 03:55:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:44.665 03:55:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.665 03:55:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:44.665 03:55:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.665 03:55:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.665 03:55:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.665 03:55:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:44.665 00:11:44.665 real 0m3.218s 00:11:44.665 user 0m10.306s 00:11:44.665 sys 0m0.777s 00:11:44.665 03:55:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:44.665 03:55:19 -- common/autotest_common.sh@10 -- # set +x 00:11:44.665 ************************************ 00:11:44.665 END TEST nvmf_multitarget 00:11:44.665 ************************************ 00:11:44.924 03:55:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:44.924 03:55:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:44.924 03:55:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.924 
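The multitarget test body that just passed is a simple count-create-delete-count assertion over the custom multitarget_rpc.py shim: one default target at startup, three after adding nvmf_tgt_1 and nvmf_tgt_2, one again after deleting them. Condensed from the trace (the rpc path is the one shown in the log; the bracketed tests mirror the '[' 1 '!=' 1 ']' checks):

  rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 copied from the trace
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default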
03:55:19 -- common/autotest_common.sh@10 -- # set +x 00:11:44.924 ************************************ 00:11:44.924 START TEST nvmf_rpc 00:11:44.924 ************************************ 00:11:44.924 03:55:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:44.924 * Looking for test storage... 00:11:44.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:44.924 03:55:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:44.924 03:55:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:44.924 03:55:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:44.924 03:55:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:44.924 03:55:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:44.924 03:55:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:44.924 03:55:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:44.924 03:55:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:44.924 03:55:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:44.924 03:55:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.924 03:55:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:44.924 03:55:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:44.924 03:55:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:44.924 03:55:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:44.924 03:55:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:44.924 03:55:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:44.924 03:55:19 -- scripts/common.sh@344 -- # : 1 00:11:44.924 03:55:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:44.924 03:55:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.924 03:55:19 -- scripts/common.sh@364 -- # decimal 1 00:11:44.924 03:55:19 -- scripts/common.sh@352 -- # local d=1 00:11:44.924 03:55:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.924 03:55:19 -- scripts/common.sh@354 -- # echo 1 00:11:44.924 03:55:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:44.924 03:55:19 -- scripts/common.sh@365 -- # decimal 2 00:11:44.924 03:55:19 -- scripts/common.sh@352 -- # local d=2 00:11:44.924 03:55:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.924 03:55:19 -- scripts/common.sh@354 -- # echo 2 00:11:44.924 03:55:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:44.924 03:55:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:44.924 03:55:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:44.924 03:55:19 -- scripts/common.sh@367 -- # return 0 00:11:44.924 03:55:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.924 03:55:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.924 --rc genhtml_branch_coverage=1 00:11:44.924 --rc genhtml_function_coverage=1 00:11:44.924 --rc genhtml_legend=1 00:11:44.924 --rc geninfo_all_blocks=1 00:11:44.924 --rc geninfo_unexecuted_blocks=1 00:11:44.924 00:11:44.924 ' 00:11:44.924 03:55:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.924 --rc genhtml_branch_coverage=1 00:11:44.924 --rc genhtml_function_coverage=1 00:11:44.924 --rc genhtml_legend=1 00:11:44.924 --rc geninfo_all_blocks=1 00:11:44.924 --rc geninfo_unexecuted_blocks=1 00:11:44.924 00:11:44.924 ' 00:11:44.924 03:55:19 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.924 --rc genhtml_branch_coverage=1 00:11:44.924 --rc genhtml_function_coverage=1 00:11:44.924 --rc genhtml_legend=1 00:11:44.924 --rc geninfo_all_blocks=1 00:11:44.924 --rc geninfo_unexecuted_blocks=1 00:11:44.924 00:11:44.924 ' 00:11:44.924 03:55:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.924 --rc genhtml_branch_coverage=1 00:11:44.924 --rc genhtml_function_coverage=1 00:11:44.924 --rc genhtml_legend=1 00:11:44.924 --rc geninfo_all_blocks=1 00:11:44.924 --rc geninfo_unexecuted_blocks=1 00:11:44.924 00:11:44.924 ' 00:11:44.924 03:55:19 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:44.924 03:55:19 -- nvmf/common.sh@7 -- # uname -s 00:11:44.924 03:55:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.925 03:55:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.925 03:55:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.925 03:55:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.925 03:55:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.925 03:55:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.925 03:55:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.925 03:55:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.925 03:55:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.925 03:55:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.925 03:55:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:44.925 03:55:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:44.925 03:55:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.925 03:55:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.925 03:55:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:44.925 03:55:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.925 03:55:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.925 03:55:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.925 03:55:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.925 03:55:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.925 03:55:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.925 03:55:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.925 03:55:20 -- paths/export.sh@5 -- # export PATH 00:11:44.925 03:55:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.925 03:55:20 -- nvmf/common.sh@46 -- # : 0 00:11:44.925 03:55:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:44.925 03:55:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:44.925 03:55:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:44.925 03:55:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.925 03:55:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.925 03:55:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:44.925 03:55:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:44.925 03:55:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:44.925 03:55:20 -- target/rpc.sh@11 -- # loops=5 00:11:44.925 03:55:20 -- target/rpc.sh@23 -- # nvmftestinit 00:11:44.925 03:55:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:44.925 03:55:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.925 03:55:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:44.925 03:55:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:44.925 03:55:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:44.925 03:55:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.925 03:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.925 03:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.925 03:55:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:44.925 03:55:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:44.925 03:55:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:44.925 03:55:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:44.925 03:55:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:44.925 03:55:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:44.925 03:55:20 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:11:44.925 03:55:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.925 03:55:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:44.925 03:55:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:44.925 03:55:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:44.925 03:55:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:44.925 03:55:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:44.925 03:55:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.925 03:55:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:44.925 03:55:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:44.925 03:55:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:44.925 03:55:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:44.925 03:55:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:45.184 03:55:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:45.184 Cannot find device "nvmf_tgt_br" 00:11:45.184 03:55:20 -- nvmf/common.sh@154 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.184 Cannot find device "nvmf_tgt_br2" 00:11:45.184 03:55:20 -- nvmf/common.sh@155 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:45.184 03:55:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:45.184 Cannot find device "nvmf_tgt_br" 00:11:45.184 03:55:20 -- nvmf/common.sh@157 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:45.184 Cannot find device "nvmf_tgt_br2" 00:11:45.184 03:55:20 -- nvmf/common.sh@158 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:45.184 03:55:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:45.184 03:55:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.184 03:55:20 -- nvmf/common.sh@161 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.184 03:55:20 -- nvmf/common.sh@162 -- # true 00:11:45.184 03:55:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.184 03:55:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.184 03:55:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.184 03:55:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.184 03:55:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.184 03:55:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.184 03:55:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.184 03:55:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.184 03:55:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.184 03:55:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:45.184 03:55:20 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:11:45.184 03:55:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:45.184 03:55:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:45.184 03:55:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.184 03:55:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.184 03:55:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.184 03:55:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:45.184 03:55:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:45.184 03:55:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.443 03:55:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.443 03:55:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.443 03:55:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.443 03:55:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.443 03:55:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:45.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:45.443 00:11:45.443 --- 10.0.0.2 ping statistics --- 00:11:45.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.443 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:45.443 03:55:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:45.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:45.443 00:11:45.443 --- 10.0.0.3 ping statistics --- 00:11:45.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.443 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:45.443 03:55:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:45.443 00:11:45.443 --- 10.0.0.1 ping statistics --- 00:11:45.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.443 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:45.443 03:55:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.443 03:55:20 -- nvmf/common.sh@421 -- # return 0 00:11:45.443 03:55:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:45.443 03:55:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.443 03:55:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:45.443 03:55:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:45.443 03:55:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.443 03:55:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:45.443 03:55:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:45.443 03:55:20 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:45.443 03:55:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:45.443 03:55:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:45.443 03:55:20 -- common/autotest_common.sh@10 -- # set +x 00:11:45.443 03:55:20 -- nvmf/common.sh@469 -- # nvmfpid=66114 00:11:45.443 03:55:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.443 03:55:20 -- nvmf/common.sh@470 -- # waitforlisten 66114 00:11:45.443 03:55:20 -- common/autotest_common.sh@829 -- # '[' -z 66114 ']' 00:11:45.443 03:55:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.443 03:55:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.443 03:55:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.443 03:55:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.443 03:55:20 -- common/autotest_common.sh@10 -- # set +x 00:11:45.443 [2024-11-08 03:55:20.445322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:45.443 [2024-11-08 03:55:20.445968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.701 [2024-11-08 03:55:20.587024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.701 [2024-11-08 03:55:20.705359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:45.701 [2024-11-08 03:55:20.705561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.701 [2024-11-08 03:55:20.705579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.701 [2024-11-08 03:55:20.705592] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
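The startup notices above come from nvmfappstart: the target binary is launched inside the namespace and waitforlisten polls its RPC socket until it answers. A rough equivalent of that launch-and-wait pattern (binary and socket paths as printed in the log; the retry loop is an approximation of the waitforlisten helper, not its actual code):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
  for _ in $(seq 1 100); do                             # give up after ~10s
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          &>/dev/null && break
      sleep 0.1
  done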
00:11:45.701 [2024-11-08 03:55:20.705769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.701 [2024-11-08 03:55:20.706251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.701 [2024-11-08 03:55:20.706370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.701 [2024-11-08 03:55:20.706394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.637 03:55:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.637 03:55:21 -- common/autotest_common.sh@862 -- # return 0 00:11:46.637 03:55:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:46.637 03:55:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.637 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.637 03:55:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.637 03:55:21 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:46.637 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.637 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.637 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.637 03:55:21 -- target/rpc.sh@26 -- # stats='{ 00:11:46.637 "poll_groups": [ 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_0", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_1", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_2", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_3", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [] 00:11:46.637 } 00:11:46.637 ], 00:11:46.637 "tick_rate": 2200000000 00:11:46.637 }' 00:11:46.637 03:55:21 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:46.637 03:55:21 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:46.637 03:55:21 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:46.637 03:55:21 -- target/rpc.sh@15 -- # wc -l 00:11:46.637 03:55:21 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:46.637 03:55:21 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:46.637 03:55:21 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:46.637 03:55:21 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.637 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.637 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.637 [2024-11-08 03:55:21.615325] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.637 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.637 03:55:21 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:46.637 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.637 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.637 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.637 03:55:21 -- target/rpc.sh@33 -- # stats='{ 00:11:46.637 "poll_groups": [ 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_0", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [ 00:11:46.637 { 00:11:46.637 "trtype": "TCP" 00:11:46.637 } 00:11:46.637 ] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_1", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [ 00:11:46.637 { 00:11:46.637 "trtype": "TCP" 00:11:46.637 } 00:11:46.637 ] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_2", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [ 00:11:46.637 { 00:11:46.637 "trtype": "TCP" 00:11:46.637 } 00:11:46.637 ] 00:11:46.637 }, 00:11:46.637 { 00:11:46.637 "admin_qpairs": 0, 00:11:46.637 "completed_nvme_io": 0, 00:11:46.637 "current_admin_qpairs": 0, 00:11:46.637 "current_io_qpairs": 0, 00:11:46.637 "io_qpairs": 0, 00:11:46.637 "name": "nvmf_tgt_poll_group_3", 00:11:46.637 "pending_bdev_io": 0, 00:11:46.637 "transports": [ 00:11:46.637 { 00:11:46.637 "trtype": "TCP" 00:11:46.637 } 00:11:46.637 ] 00:11:46.637 } 00:11:46.637 ], 00:11:46.637 "tick_rate": 2200000000 00:11:46.637 }' 00:11:46.637 03:55:21 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.637 03:55:21 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:46.637 03:55:21 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:46.637 03:55:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.896 03:55:21 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:46.896 03:55:21 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:46.896 03:55:21 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:46.896 03:55:21 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:46.896 03:55:21 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:46.896 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.896 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.896 Malloc1 00:11:46.896 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.896 03:55:21 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.896 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.896 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.896 
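Around the transport creation, rpc.sh asserts per-poll-group state with two small helpers visible in the trace: jcount (a jq filter piped through wc -l) and jsum (the same filter summed with awk). The checks reduce to roughly this (rpc_cmd is the test wrapper around scripts/rpc.py; the create_transport flags are copied verbatim from the trace):

  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l                        # jcount: 4 groups for -m 0xF
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                  # flags as in the trace
  stats=$(rpc_cmd nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[0].transports[0].trtype'                        # now "TCP" on every group
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # jsum: 0, nothing connected yet
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # jsum: 0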
03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.896 03:55:21 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.896 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.896 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.897 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.897 03:55:21 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:46.897 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.897 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.897 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.897 03:55:21 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.897 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.897 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.897 [2024-11-08 03:55:21.828352] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.897 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.897 03:55:21 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -a 10.0.0.2 -s 4420 00:11:46.897 03:55:21 -- common/autotest_common.sh@650 -- # local es=0 00:11:46.897 03:55:21 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -a 10.0.0.2 -s 4420 00:11:46.897 03:55:21 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:46.897 03:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.897 03:55:21 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:46.897 03:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.897 03:55:21 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:46.897 03:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.897 03:55:21 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:46.897 03:55:21 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:46.897 03:55:21 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -a 10.0.0.2 -s 4420 00:11:46.897 [2024-11-08 03:55:21.856793] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01' 00:11:46.897 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:46.897 could not add new controller: failed to write to nvme-fabrics device 00:11:46.897 03:55:21 -- common/autotest_common.sh@653 -- # es=1 00:11:46.897 03:55:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.897 03:55:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.897 03:55:21 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:11:46.897 03:55:21 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:46.897 03:55:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.897 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:46.897 03:55:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.897 03:55:21 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.155 03:55:22 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.155 03:55:22 -- common/autotest_common.sh@1187 -- # local i=0 00:11:47.155 03:55:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.155 03:55:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:47.155 03:55:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:49.062 03:55:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:49.062 03:55:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:49.062 03:55:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.062 03:55:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:49.062 03:55:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.062 03:55:24 -- common/autotest_common.sh@1197 -- # return 0 00:11:49.062 03:55:24 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.062 03:55:24 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.062 03:55:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:49.062 03:55:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:49.062 03:55:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.062 03:55:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:49.062 03:55:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.062 03:55:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:49.062 03:55:24 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:11:49.062 03:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.062 03:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:49.062 03:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.062 03:55:24 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.062 03:55:24 -- common/autotest_common.sh@650 -- # local es=0 00:11:49.062 03:55:24 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.062 03:55:24 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:49.062 03:55:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.062 03:55:24 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:49.062 03:55:24 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.062 03:55:24 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:49.062 03:55:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.062 03:55:24 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:49.062 03:55:24 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:49.062 03:55:24 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.062 [2024-11-08 03:55:24.168213] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01' 00:11:49.321 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:49.321 could not add new controller: failed to write to nvme-fabrics device 00:11:49.321 03:55:24 -- common/autotest_common.sh@653 -- # es=1 00:11:49.321 03:55:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:49.321 03:55:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:49.321 03:55:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:49.321 03:55:24 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:49.321 03:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.321 03:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:49.321 03:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.321 03:55:24 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.321 03:55:24 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.321 03:55:24 -- common/autotest_common.sh@1187 -- # local i=0 00:11:49.321 03:55:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.321 03:55:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:49.321 03:55:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:51.854 03:55:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:51.854 03:55:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:51.854 03:55:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:51.854 03:55:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.854 03:55:26 -- common/autotest_common.sh@1197 -- # return 0 00:11:51.854 03:55:26 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.854 03:55:26 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.854 03:55:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.854 03:55:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.854 03:55:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.854 03:55:26 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.854 03:55:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.854 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 03:55:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.854 03:55:26 -- target/rpc.sh@81 -- # seq 1 5 00:11:51.854 03:55:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.854 03:55:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.854 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 03:55:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.854 03:55:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.854 03:55:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.854 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 [2024-11-08 03:55:26.470181] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.854 03:55:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.854 03:55:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.854 03:55:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.854 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 03:55:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.854 03:55:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.854 03:55:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.854 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 03:55:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.854 03:55:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.854 03:55:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.854 03:55:26 -- common/autotest_common.sh@1187 -- # local i=0 00:11:51.854 03:55:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.854 03:55:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:51.854 03:55:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:53.801 03:55:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:53.801 03:55:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:53.801 03:55:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.801 03:55:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:53.801 03:55:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.801 03:55:28 -- common/autotest_common.sh@1197 -- # return 0 00:11:53.801 03:55:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.801 03:55:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.801 03:55:28 -- common/autotest_common.sh@1208 -- # local i=0 00:11:53.801 03:55:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:53.801 03:55:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
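After the host-authorization checks (nvme connect is expected to fail until the host NQN is added or allow_any_host is enabled, which is what the NOT wrapper asserts), rpc.sh runs a five-iteration loop that recreates the subsystem from scratch each time. One iteration, condensed from the trace (NQNs, addresses, and the Malloc1/namespace-5 pairing are all as logged; waitforserial is the test helper that polls lsblk for the serial):

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace ID 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      waitforserial SPDKISFASTANDAWESOME          # block until the namespace shows up in lsblk
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done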
00:11:53.801 03:55:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.801 03:55:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:53.801 03:55:28 -- common/autotest_common.sh@1220 -- # return 0 00:11:53.801 03:55:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.801 03:55:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 [2024-11-08 03:55:28.782672] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.801 03:55:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.801 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.801 03:55:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.801 03:55:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.059 03:55:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.060 03:55:28 -- common/autotest_common.sh@1187 -- # local i=0 00:11:54.060 03:55:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.060 03:55:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:54.060 03:55:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:55.962 03:55:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:55.962 03:55:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:55.962 03:55:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.962 03:55:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:55.962 03:55:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.962 03:55:30 -- 
common/autotest_common.sh@1197 -- # return 0 00:11:55.962 03:55:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.221 03:55:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.221 03:55:31 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.221 03:55:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.221 03:55:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.221 03:55:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.221 03:55:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.221 03:55:31 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.221 03:55:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.221 03:55:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 [2024-11-08 03:55:31.199765] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.221 03:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.221 03:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 03:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.221 03:55:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.480 03:55:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.480 03:55:31 -- common/autotest_common.sh@1187 -- # local i=0 00:11:56.480 03:55:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.480 03:55:31 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:11:56.480 03:55:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:58.379 03:55:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:58.379 03:55:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:58.379 03:55:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.379 03:55:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:58.379 03:55:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.379 03:55:33 -- common/autotest_common.sh@1197 -- # return 0 00:11:58.379 03:55:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.379 03:55:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.379 03:55:33 -- common/autotest_common.sh@1208 -- # local i=0 00:11:58.379 03:55:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:58.379 03:55:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.379 03:55:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:58.379 03:55:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.379 03:55:33 -- common/autotest_common.sh@1220 -- # return 0 00:11:58.379 03:55:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.379 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.379 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 03:55:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.638 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.638 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 03:55:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:58.638 03:55:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.638 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.638 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 03:55:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.638 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.638 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 [2024-11-08 03:55:33.512322] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 03:55:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:58.638 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.638 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 03:55:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.638 03:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.638 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 03:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.638 
03:55:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.638 03:55:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.638 03:55:33 -- common/autotest_common.sh@1187 -- # local i=0 00:11:58.638 03:55:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.638 03:55:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:58.638 03:55:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:01.171 03:55:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:01.171 03:55:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:01.171 03:55:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.171 03:55:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:01.171 03:55:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.171 03:55:35 -- common/autotest_common.sh@1197 -- # return 0 00:12:01.171 03:55:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.171 03:55:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.171 03:55:35 -- common/autotest_common.sh@1208 -- # local i=0 00:12:01.171 03:55:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.171 03:55:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:01.171 03:55:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.171 03:55:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:01.171 03:55:35 -- common/autotest_common.sh@1220 -- # return 0 00:12:01.171 03:55:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.171 03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.171 03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:01.171 03:55:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.171 03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.171 03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 [2024-11-08 03:55:35.840972] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:01.171 
03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.171 03:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.171 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 03:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.171 03:55:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.171 03:55:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.171 03:55:36 -- common/autotest_common.sh@1187 -- # local i=0 00:12:01.171 03:55:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.171 03:55:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:01.171 03:55:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:03.074 03:55:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:03.074 03:55:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:03.074 03:55:38 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.074 03:55:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:03.074 03:55:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.074 03:55:38 -- common/autotest_common.sh@1197 -- # return 0 00:12:03.074 03:55:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.074 03:55:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.074 03:55:38 -- common/autotest_common.sh@1208 -- # local i=0 00:12:03.074 03:55:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:03.074 03:55:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.074 03:55:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:03.074 03:55:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.074 03:55:38 -- common/autotest_common.sh@1220 -- # return 0 00:12:03.074 03:55:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 -- target/rpc.sh@99 -- # seq 1 5 00:12:03.074 03:55:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.074 03:55:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 [2024-11-08 03:55:38.158671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.074 03:55:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.074 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.074 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.333 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.333 03:55:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.333 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.333 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.333 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.333 03:55:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.333 03:55:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.333 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.333 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.333 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.333 03:55:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.333 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.333 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.333 [2024-11-08 03:55:38.206733] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.333 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.333 03:55:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.333 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.333 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.334 03:55:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 [2024-11-08 03:55:38.254818] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.334 03:55:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 [2024-11-08 03:55:38.302878] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 
03:55:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.334 03:55:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 [2024-11-08 03:55:38.350930] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
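[editor's note] The second loop (target/rpc.sh@99-107), which just completed, churns the same lifecycle five times with no host attached at all, exercising pure target-side setup/teardown; the nvmf_get_stats call invoked above then dumps per-poll-group counters next. A condensed sketch of that loop, assuming rpc.py:
  for i in $(seq 1 5); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # nsid 1, auto-assigned above
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done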
00:12:03.334 03:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.334 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 03:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.334 03:55:38 -- target/rpc.sh@110 -- # stats='{ 00:12:03.334 "poll_groups": [ 00:12:03.334 { 00:12:03.334 "admin_qpairs": 2, 00:12:03.334 "completed_nvme_io": 115, 00:12:03.334 "current_admin_qpairs": 0, 00:12:03.334 "current_io_qpairs": 0, 00:12:03.334 "io_qpairs": 16, 00:12:03.334 "name": "nvmf_tgt_poll_group_0", 00:12:03.334 "pending_bdev_io": 0, 00:12:03.334 "transports": [ 00:12:03.334 { 00:12:03.334 "trtype": "TCP" 00:12:03.334 } 00:12:03.334 ] 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "admin_qpairs": 3, 00:12:03.334 "completed_nvme_io": 69, 00:12:03.334 "current_admin_qpairs": 0, 00:12:03.334 "current_io_qpairs": 0, 00:12:03.334 "io_qpairs": 17, 00:12:03.334 "name": "nvmf_tgt_poll_group_1", 00:12:03.334 "pending_bdev_io": 0, 00:12:03.334 "transports": [ 00:12:03.334 { 00:12:03.334 "trtype": "TCP" 00:12:03.334 } 00:12:03.334 ] 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "admin_qpairs": 1, 00:12:03.334 "completed_nvme_io": 120, 00:12:03.334 "current_admin_qpairs": 0, 00:12:03.334 "current_io_qpairs": 0, 00:12:03.334 "io_qpairs": 19, 00:12:03.334 "name": "nvmf_tgt_poll_group_2", 00:12:03.334 "pending_bdev_io": 0, 00:12:03.334 "transports": [ 00:12:03.334 { 00:12:03.334 "trtype": "TCP" 00:12:03.334 } 00:12:03.334 ] 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "admin_qpairs": 1, 00:12:03.334 "completed_nvme_io": 116, 00:12:03.334 "current_admin_qpairs": 0, 00:12:03.334 "current_io_qpairs": 0, 00:12:03.334 "io_qpairs": 18, 00:12:03.334 "name": "nvmf_tgt_poll_group_3", 00:12:03.334 "pending_bdev_io": 0, 00:12:03.334 "transports": [ 00:12:03.334 { 00:12:03.334 "trtype": "TCP" 00:12:03.334 } 00:12:03.334 ] 00:12:03.334 } 00:12:03.334 ], 00:12:03.334 "tick_rate": 2200000000 00:12:03.334 }' 00:12:03.334 03:55:38 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:03.334 03:55:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:03.334 03:55:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:03.334 03:55:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.593 03:55:38 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:03.593 03:55:38 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:03.593 03:55:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:03.593 03:55:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:03.593 03:55:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.593 03:55:38 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:03.593 03:55:38 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:03.593 03:55:38 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:03.593 03:55:38 -- target/rpc.sh@123 -- # nvmftestfini 00:12:03.593 03:55:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:03.593 03:55:38 -- nvmf/common.sh@116 -- # sync 00:12:03.593 03:55:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:03.593 03:55:38 -- nvmf/common.sh@119 -- # set +e 00:12:03.593 03:55:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:03.593 03:55:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:03.593 rmmod nvme_tcp 00:12:03.593 rmmod nvme_fabrics 00:12:03.593 rmmod nvme_keyring 00:12:03.593 03:55:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:03.593 03:55:38 -- nvmf/common.sh@123 -- # set -e 00:12:03.593 03:55:38 -- nvmf/common.sh@124 
-- # return 0 00:12:03.593 03:55:38 -- nvmf/common.sh@477 -- # '[' -n 66114 ']' 00:12:03.593 03:55:38 -- nvmf/common.sh@478 -- # killprocess 66114 00:12:03.593 03:55:38 -- common/autotest_common.sh@936 -- # '[' -z 66114 ']' 00:12:03.593 03:55:38 -- common/autotest_common.sh@940 -- # kill -0 66114 00:12:03.593 03:55:38 -- common/autotest_common.sh@941 -- # uname 00:12:03.593 03:55:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.593 03:55:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66114 00:12:03.593 killing process with pid 66114 00:12:03.593 03:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.593 03:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.593 03:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66114' 00:12:03.593 03:55:38 -- common/autotest_common.sh@955 -- # kill 66114 00:12:03.593 03:55:38 -- common/autotest_common.sh@960 -- # wait 66114 00:12:04.161 03:55:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:04.161 03:55:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:04.161 03:55:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:04.161 03:55:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.161 03:55:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:04.161 03:55:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.161 03:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.161 03:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.161 03:55:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:04.161 00:12:04.161 real 0m19.204s 00:12:04.161 user 1m12.520s 00:12:04.161 sys 0m2.070s 00:12:04.161 03:55:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.161 ************************************ 00:12:04.161 END TEST nvmf_rpc 00:12:04.161 ************************************ 00:12:04.161 03:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:04.161 03:55:39 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.161 03:55:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:04.161 03:55:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.161 03:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:04.161 ************************************ 00:12:04.161 START TEST nvmf_invalid 00:12:04.161 ************************************ 00:12:04.161 03:55:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.161 * Looking for test storage... 
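[editor's note] Before the nvmf_rpc teardown above, the jsum checks validated the stats dump by summing one numeric field across all four poll groups with jq and awk. A standalone equivalent, assuming the JSON shown earlier is saved in a hypothetical stats.json:
  jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1} END {print s}'   # 2+3+1+1 = 7
  jq '.poll_groups[].io_qpairs'    stats.json | awk '{s+=$1} END {print s}'   # 16+17+19+18 = 70
Both sums only need to be positive for the test to pass, hence the (( 7 > 0 )) and (( 70 > 0 )) checks above.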
00:12:04.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.161 03:55:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:04.161 03:55:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:04.161 03:55:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:04.161 03:55:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:04.161 03:55:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:04.161 03:55:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:04.161 03:55:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:04.161 03:55:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:04.161 03:55:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:04.161 03:55:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.161 03:55:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:04.161 03:55:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:04.161 03:55:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:04.161 03:55:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:04.161 03:55:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:04.161 03:55:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:04.161 03:55:39 -- scripts/common.sh@344 -- # : 1 00:12:04.161 03:55:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:04.161 03:55:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.161 03:55:39 -- scripts/common.sh@364 -- # decimal 1 00:12:04.161 03:55:39 -- scripts/common.sh@352 -- # local d=1 00:12:04.161 03:55:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.161 03:55:39 -- scripts/common.sh@354 -- # echo 1 00:12:04.161 03:55:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:04.161 03:55:39 -- scripts/common.sh@365 -- # decimal 2 00:12:04.161 03:55:39 -- scripts/common.sh@352 -- # local d=2 00:12:04.161 03:55:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.161 03:55:39 -- scripts/common.sh@354 -- # echo 2 00:12:04.161 03:55:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:04.161 03:55:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:04.161 03:55:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:04.161 03:55:39 -- scripts/common.sh@367 -- # return 0 00:12:04.161 03:55:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.161 03:55:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.161 --rc genhtml_branch_coverage=1 00:12:04.161 --rc genhtml_function_coverage=1 00:12:04.161 --rc genhtml_legend=1 00:12:04.161 --rc geninfo_all_blocks=1 00:12:04.161 --rc geninfo_unexecuted_blocks=1 00:12:04.161 00:12:04.161 ' 00:12:04.161 03:55:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.161 --rc genhtml_branch_coverage=1 00:12:04.161 --rc genhtml_function_coverage=1 00:12:04.161 --rc genhtml_legend=1 00:12:04.161 --rc geninfo_all_blocks=1 00:12:04.161 --rc geninfo_unexecuted_blocks=1 00:12:04.161 00:12:04.161 ' 00:12:04.161 03:55:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.161 --rc genhtml_branch_coverage=1 00:12:04.161 --rc genhtml_function_coverage=1 00:12:04.161 --rc genhtml_legend=1 00:12:04.161 --rc geninfo_all_blocks=1 00:12:04.161 --rc geninfo_unexecuted_blocks=1 00:12:04.161 00:12:04.161 ' 00:12:04.161 
03:55:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.161 --rc genhtml_branch_coverage=1 00:12:04.162 --rc genhtml_function_coverage=1 00:12:04.162 --rc genhtml_legend=1 00:12:04.162 --rc geninfo_all_blocks=1 00:12:04.162 --rc geninfo_unexecuted_blocks=1 00:12:04.162 00:12:04.162 ' 00:12:04.162 03:55:39 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.162 03:55:39 -- nvmf/common.sh@7 -- # uname -s 00:12:04.162 03:55:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.162 03:55:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.162 03:55:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.162 03:55:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.162 03:55:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.162 03:55:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.162 03:55:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.162 03:55:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.162 03:55:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.162 03:55:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.162 03:55:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:04.162 03:55:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:04.162 03:55:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.162 03:55:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.162 03:55:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.162 03:55:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.162 03:55:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.162 03:55:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.420 03:55:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.420 03:55:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.420 03:55:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.420 03:55:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.420 03:55:39 -- paths/export.sh@5 -- # export PATH 00:12:04.420 03:55:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.420 03:55:39 -- nvmf/common.sh@46 -- # : 0 00:12:04.420 03:55:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:04.420 03:55:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:04.420 03:55:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:04.420 03:55:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.421 03:55:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.421 03:55:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:04.421 03:55:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:04.421 03:55:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:04.421 03:55:39 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.421 03:55:39 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.421 03:55:39 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:04.421 03:55:39 -- target/invalid.sh@14 -- # target=foobar 00:12:04.421 03:55:39 -- target/invalid.sh@16 -- # RANDOM=0 00:12:04.421 03:55:39 -- target/invalid.sh@34 -- # nvmftestinit 00:12:04.421 03:55:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:04.421 03:55:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.421 03:55:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:04.421 03:55:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:04.421 03:55:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:04.421 03:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.421 03:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.421 03:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.421 03:55:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:04.421 03:55:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:04.421 03:55:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:04.421 03:55:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:04.421 03:55:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:04.421 03:55:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:04.421 03:55:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.421 03:55:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.421 03:55:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
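[editor's note] For orientation, the veth variables just exported lay out this address plan (a summary of the settings above, not new configuration):
  # 10.0.0.1/24  nvmf_init_if   initiator side, default netns
  # 10.0.0.2/24  nvmf_tgt_if    first target port, inside nvmf_tgt_ns_spdk
  # 10.0.0.3/24  nvmf_tgt_if2   second target port, same netns
  # each veth peer (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) gets enslaved
  # to the nvmf_br bridge in the steps that follow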
00:12:04.421 03:55:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:04.421 03:55:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.421 03:55:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.421 03:55:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.421 03:55:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.421 03:55:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.421 03:55:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.421 03:55:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.421 03:55:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.421 03:55:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:04.421 03:55:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:04.421 Cannot find device "nvmf_tgt_br" 00:12:04.421 03:55:39 -- nvmf/common.sh@154 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.421 Cannot find device "nvmf_tgt_br2" 00:12:04.421 03:55:39 -- nvmf/common.sh@155 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:04.421 03:55:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:04.421 Cannot find device "nvmf_tgt_br" 00:12:04.421 03:55:39 -- nvmf/common.sh@157 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:04.421 Cannot find device "nvmf_tgt_br2" 00:12:04.421 03:55:39 -- nvmf/common.sh@158 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:04.421 03:55:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:04.421 03:55:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.421 03:55:39 -- nvmf/common.sh@161 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.421 03:55:39 -- nvmf/common.sh@162 -- # true 00:12:04.421 03:55:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.421 03:55:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.421 03:55:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.421 03:55:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.421 03:55:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.421 03:55:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.421 03:55:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.421 03:55:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:04.421 03:55:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:04.421 03:55:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:04.421 03:55:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:04.421 03:55:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:04.421 03:55:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
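[editor's note] Condensed, the namespace and veth plumbing replayed above (the "Cannot find device"/"Cannot open network namespace" lines are just idempotent cleanup of a previous run failing harmlessly) comes down to, as root:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up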
00:12:04.679 03:55:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.679 03:55:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.679 03:55:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.679 03:55:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:04.679 03:55:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:04.679 03:55:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.679 03:55:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.679 03:55:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.679 03:55:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.679 03:55:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.679 03:55:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:04.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:04.679 00:12:04.679 --- 10.0.0.2 ping statistics --- 00:12:04.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.679 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:04.679 03:55:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:04.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:04.679 00:12:04.679 --- 10.0.0.3 ping statistics --- 00:12:04.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.679 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:04.679 03:55:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:04.679 00:12:04.679 --- 10.0.0.1 ping statistics --- 00:12:04.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.679 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:04.679 03:55:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.679 03:55:39 -- nvmf/common.sh@421 -- # return 0 00:12:04.679 03:55:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:04.679 03:55:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.679 03:55:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:04.679 03:55:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:04.679 03:55:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.679 03:55:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:04.679 03:55:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:04.679 03:55:39 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:04.679 03:55:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:04.679 03:55:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.679 03:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:04.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
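[editor's note] The remaining steps bring up the target-side links, tie every peer into one bridge, open TCP/4420 on the initiator interface, and prove reachability in both directions; the three successful pings above confirm the topology. Condensed:
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ping -c 1 10.0.0.3                                   # initiator -> second port
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator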
00:12:04.679 03:55:39 -- nvmf/common.sh@469 -- # nvmfpid=66635 00:12:04.679 03:55:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.679 03:55:39 -- nvmf/common.sh@470 -- # waitforlisten 66635 00:12:04.679 03:55:39 -- common/autotest_common.sh@829 -- # '[' -z 66635 ']' 00:12:04.679 03:55:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.679 03:55:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.679 03:55:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.679 03:55:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.679 03:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:04.679 [2024-11-08 03:55:39.701105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:04.679 [2024-11-08 03:55:39.701205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.938 [2024-11-08 03:55:39.840024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.938 [2024-11-08 03:55:39.920920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:04.938 [2024-11-08 03:55:39.921083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.938 [2024-11-08 03:55:39.921097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.938 [2024-11-08 03:55:39.921104] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
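[editor's note] nvmfappstart launches nvmf_tgt inside the namespace (pid 66635 above) and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that wait, assuming rpc.py; the real autotest_common.sh helper carries more bookkeeping than this sketch:
  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for i in $(seq 1 100); do
          kill -0 "$pid" 2>/dev/null || return 1                  # app died early
          rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                                    # never came up
  }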
00:12:04.938 [2024-11-08 03:55:39.921251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.938 [2024-11-08 03:55:39.921628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.938 [2024-11-08 03:55:39.922459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.938 [2024-11-08 03:55:39.922466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.873 03:55:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.873 03:55:40 -- common/autotest_common.sh@862 -- # return 0 00:12:05.873 03:55:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:05.873 03:55:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.873 03:55:40 -- common/autotest_common.sh@10 -- # set +x 00:12:05.873 03:55:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.873 03:55:40 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.873 03:55:40 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25991 00:12:05.873 [2024-11-08 03:55:40.967304] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:06.131 03:55:40 -- target/invalid.sh@40 -- # out='2024/11/08 03:55:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25991 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:06.131 request: 00:12:06.131 { 00:12:06.131 "method": "nvmf_create_subsystem", 00:12:06.131 "params": { 00:12:06.131 "nqn": "nqn.2016-06.io.spdk:cnode25991", 00:12:06.132 "tgt_name": "foobar" 00:12:06.132 } 00:12:06.132 } 00:12:06.132 Got JSON-RPC error response 00:12:06.132 GoRPCClient: error on JSON-RPC call' 00:12:06.132 03:55:40 -- target/invalid.sh@41 -- # [[ 2024/11/08 03:55:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25991 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:06.132 request: 00:12:06.132 { 00:12:06.132 "method": "nvmf_create_subsystem", 00:12:06.132 "params": { 00:12:06.132 "nqn": "nqn.2016-06.io.spdk:cnode25991", 00:12:06.132 "tgt_name": "foobar" 00:12:06.132 } 00:12:06.132 } 00:12:06.132 Got JSON-RPC error response 00:12:06.132 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:06.132 03:55:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:06.132 03:55:40 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22415 00:12:06.390 [2024-11-08 03:55:41.280035] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22415: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:06.390 03:55:41 -- target/invalid.sh@45 -- # out='2024/11/08 03:55:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22415 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:06.390 request: 00:12:06.390 { 00:12:06.390 "method": "nvmf_create_subsystem", 00:12:06.390 "params": { 00:12:06.390 "nqn": "nqn.2016-06.io.spdk:cnode22415", 00:12:06.390 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:06.390 } 00:12:06.390 } 00:12:06.390 Got JSON-RPC error response 00:12:06.390 GoRPCClient: error on JSON-RPC call' 00:12:06.390 03:55:41 -- target/invalid.sh@46 -- # [[ 2024/11/08 03:55:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22415 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:06.390 request: 00:12:06.390 { 00:12:06.390 "method": "nvmf_create_subsystem", 00:12:06.390 "params": { 00:12:06.390 "nqn": "nqn.2016-06.io.spdk:cnode22415", 00:12:06.390 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:06.390 } 00:12:06.390 } 00:12:06.390 Got JSON-RPC error response 00:12:06.390 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:06.390 03:55:41 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:06.390 03:55:41 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1236 00:12:06.648 [2024-11-08 03:55:41.588529] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1236: invalid model number 'SPDK_Controller' 00:12:06.648 03:55:41 -- target/invalid.sh@50 -- # out='2024/11/08 03:55:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode1236], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:06.648 request: 00:12:06.648 { 00:12:06.648 "method": "nvmf_create_subsystem", 00:12:06.648 "params": { 00:12:06.648 "nqn": "nqn.2016-06.io.spdk:cnode1236", 00:12:06.648 "model_number": "SPDK_Controller\u001f" 00:12:06.648 } 00:12:06.648 } 00:12:06.648 Got JSON-RPC error response 00:12:06.648 GoRPCClient: error on JSON-RPC call' 00:12:06.648 03:55:41 -- target/invalid.sh@51 -- # [[ 2024/11/08 03:55:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode1236], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:06.648 request: 00:12:06.648 { 00:12:06.648 "method": "nvmf_create_subsystem", 00:12:06.648 "params": { 00:12:06.648 "nqn": "nqn.2016-06.io.spdk:cnode1236", 00:12:06.648 "model_number": "SPDK_Controller\u001f" 00:12:06.648 } 00:12:06.648 } 00:12:06.648 Got JSON-RPC error response 00:12:06.648 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:06.648 03:55:41 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:06.648 03:55:41 -- target/invalid.sh@19 -- # local length=21 ll 00:12:06.648 03:55:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:06.648 03:55:41 -- target/invalid.sh@21 -- # local chars 00:12:06.648 03:55:41 -- target/invalid.sh@22 -- # local string 00:12:06.648 03:55:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:06.648 03:55:41 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:12:06.648 03:55:41 -- target/invalid.sh@25 -- # printf %x 98 00:12:06.648 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:06.648 03:55:41 -- target/invalid.sh@25 -- # string+=b 00:12:06.648 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.648 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.648 03:55:41 -- target/invalid.sh@25 -- # printf %x 102 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=f 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 86 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=V 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 118 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=v 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 71 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=G 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 127 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 47 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=/ 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 122 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=z 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 34 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+='"' 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 78 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=N 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 63 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+='?' 
00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 119 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=w 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 99 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=c 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 118 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=v 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 83 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=S 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 70 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=F 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 60 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+='<' 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 93 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=']' 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 100 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=d 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 109 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+=m 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # printf %x 62 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:06.649 03:55:41 -- target/invalid.sh@25 -- # string+='>' 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.649 03:55:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.649 03:55:41 -- target/invalid.sh@28 -- # [[ b == \- ]] 00:12:06.649 03:55:41 -- target/invalid.sh@31 -- # echo 'bfVvG/z"N?wcvSF<]dm>' 00:12:06.649 03:55:41 -- target/invalid.sh@54 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bfVvG/z"N?wcvSF<]dm>' nqn.2016-06.io.spdk:cnode12562 00:12:07.215 [2024-11-08 03:55:42.025085] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12562: invalid serial number 'bfVvG/z"N?wcvSF<]dm>' 00:12:07.215 03:55:42 -- target/invalid.sh@54 -- # out='2024/11/08 03:55:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12562 serial_number:bfVvG/z"N?wcvSF<]dm>], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN bfVvG/z"N?wcvSF<]dm> 00:12:07.215 request: 00:12:07.215 { 00:12:07.215 "method": "nvmf_create_subsystem", 00:12:07.215 "params": { 00:12:07.215 "nqn": "nqn.2016-06.io.spdk:cnode12562", 00:12:07.215 "serial_number": "bfVvG\u007f/z\"N?wcvSF<]dm>" 00:12:07.215 } 00:12:07.215 } 00:12:07.215 Got JSON-RPC error response 00:12:07.216 GoRPCClient: error on JSON-RPC call' 00:12:07.216 03:55:42 -- target/invalid.sh@55 -- # [[ 2024/11/08 03:55:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12562 serial_number:bfVvG/z"N?wcvSF<]dm>], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN bfVvG/z"N?wcvSF<]dm> 00:12:07.216 request: 00:12:07.216 { 00:12:07.216 "method": "nvmf_create_subsystem", 00:12:07.216 "params": { 00:12:07.216 "nqn": "nqn.2016-06.io.spdk:cnode12562", 00:12:07.216 "serial_number": "bfVvG\u007f/z\"N?wcvSF<]dm>" 00:12:07.216 } 00:12:07.216 } 00:12:07.216 Got JSON-RPC error response 00:12:07.216 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.216 03:55:42 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:07.216 03:55:42 -- target/invalid.sh@19 -- # local length=41 ll 00:12:07.216 03:55:42 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.216 03:55:42 -- target/invalid.sh@21 -- # local chars 00:12:07.216 03:55:42 -- target/invalid.sh@22 -- # local string 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 57 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=9 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 46 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=. 
00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 119 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=w 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 106 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=j 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 77 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=M 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 36 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='$' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 92 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='\' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 95 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=_ 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 60 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='<' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 90 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=Z 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 66 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=B 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 34 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='"' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 35 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='#' 
00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 122 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=z 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 42 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='*' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 83 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=S 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 83 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=S 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 53 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=5 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 36 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='$' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 110 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=n 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 44 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=, 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 51 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=3 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 107 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=k 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 76 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=L 
00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 93 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+=']' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 62 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='>' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 63 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # string+='?' 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.216 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.216 03:55:42 -- target/invalid.sh@25 -- # printf %x 59 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=';' 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 49 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=1 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 74 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=J 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 127 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 33 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+='!' 
00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 78 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=N 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 81 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=Q 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 62 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+='>' 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 126 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+='~' 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 115 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=s 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 70 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=F 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 85 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=U 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 33 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+='!' 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # printf %x 46 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:07.217 03:55:42 -- target/invalid.sh@25 -- # string+=. 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.217 03:55:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.217 03:55:42 -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:12:07.217 03:55:42 -- target/invalid.sh@31 -- # echo '9.wjM$\_?;1J!NQ>~sFU!.' 00:12:07.217 03:55:42 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '9.wjM$\_?;1J!NQ>~sFU!.' nqn.2016-06.io.spdk:cnode2022 00:12:07.475 [2024-11-08 03:55:42.473640] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2022: invalid model number '9.wjM$\_?;1J!NQ>~sFU!.' 
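The printf/echo walls above are gen_random_s expanding one trace line per character: it draws each of the 21 (serial number) or 41 (model number) code points from the chars pool of ASCII 32..127, converts it to hex with printf %x, renders it with echo -e, and appends it to string; the closing [[ ... == \- ]] test checks that the result does not begin with a dash, so it can be passed as a CLI argument. A condensed sketch of the same loop follows — the uniform RANDOM-based pick is an assumption, since the actual selection logic in target/invalid.sh is not visible in this trace:

  # Sketch only -- condensed form of the gen_random_s trace above.
  # Assumption: code points are chosen uniformly with $RANDOM.
  gen_random_s() {
    local length=$1 ll string=
    local chars=( {32..127} )    # printable ASCII plus DEL, as in the traced chars array
    for (( ll = 0; ll < length; ll++ )); do
      local code=${chars[RANDOM % ${#chars[@]}]}
      # same printf %x / echo -e pairing seen in the trace
      string+=$(echo -en "\\x$(printf '%x' "$code")")
    done
    # (target/invalid.sh additionally verifies the first character is not '-')
    echo "$string"
  }
  gen_random_s 41    # e.g. a string like the invalid model number rejected above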
00:12:07.475 03:55:42 -- target/invalid.sh@58 -- # out='2024/11/08 03:55:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:9.wjM$\_?;1J!NQ>~sFU!. nqn:nqn.2016-06.io.spdk:cnode2022], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 9.wjM$\_?;1J!NQ>~sFU!. 00:12:07.475 request: 00:12:07.475 { 00:12:07.475 "method": "nvmf_create_subsystem", 00:12:07.475 "params": { 00:12:07.475 "nqn": "nqn.2016-06.io.spdk:cnode2022", 00:12:07.475 "model_number": "9.wjM$\\_?;1J\u007f!NQ>~sFU!." 00:12:07.475 } 00:12:07.475 } 00:12:07.475 Got JSON-RPC error response 00:12:07.475 GoRPCClient: error on JSON-RPC call' 00:12:07.475 03:55:42 -- target/invalid.sh@59 -- # [[ 2024/11/08 03:55:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:9.wjM$\_?;1J!NQ>~sFU!. nqn:nqn.2016-06.io.spdk:cnode2022], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 9.wjM$\_?;1J!NQ>~sFU!. 00:12:07.475 request: 00:12:07.475 { 00:12:07.475 "method": "nvmf_create_subsystem", 00:12:07.475 "params": { 00:12:07.475 "nqn": "nqn.2016-06.io.spdk:cnode2022", 00:12:07.475 "model_number": "9.wjM$\\_?;1J\u007f!NQ>~sFU!." 00:12:07.475 } 00:12:07.475 } 00:12:07.475 Got JSON-RPC error response 00:12:07.475 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.475 03:55:42 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:07.732 [2024-11-08 03:55:42.725910] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.732 03:55:42 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:07.990 03:55:43 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:07.990 03:55:43 -- target/invalid.sh@67 -- # echo '' 00:12:07.990 03:55:43 -- target/invalid.sh@67 -- # head -n 1 00:12:07.990 03:55:43 -- target/invalid.sh@67 -- # IP= 00:12:07.990 03:55:43 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:08.248 [2024-11-08 03:55:43.251496] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:08.248 03:55:43 -- target/invalid.sh@69 -- # out='2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:08.248 request: 00:12:08.248 { 00:12:08.248 "method": "nvmf_subsystem_remove_listener", 00:12:08.248 "params": { 00:12:08.248 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:08.248 "listen_address": { 00:12:08.248 "trtype": "tcp", 00:12:08.248 "traddr": "", 00:12:08.248 "trsvcid": "4421" 00:12:08.248 } 00:12:08.248 } 00:12:08.248 } 00:12:08.248 Got JSON-RPC error response 00:12:08.248 GoRPCClient: error on JSON-RPC call' 00:12:08.248 03:55:43 -- target/invalid.sh@70 -- # [[ 2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:08.248 request: 00:12:08.248 { 00:12:08.248 "method": "nvmf_subsystem_remove_listener", 00:12:08.248 "params": { 00:12:08.248 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:12:08.248 "listen_address": { 00:12:08.248 "trtype": "tcp", 00:12:08.248 "traddr": "", 00:12:08.248 "trsvcid": "4421" 00:12:08.248 } 00:12:08.248 } 00:12:08.248 } 00:12:08.248 Got JSON-RPC error response 00:12:08.248 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:08.248 03:55:43 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28592 -i 0 00:12:08.513 [2024-11-08 03:55:43.503766] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28592: invalid cntlid range [0-65519] 00:12:08.513 03:55:43 -- target/invalid.sh@73 -- # out='2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode28592], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:08.513 request: 00:12:08.513 { 00:12:08.513 "method": "nvmf_create_subsystem", 00:12:08.513 "params": { 00:12:08.513 "nqn": "nqn.2016-06.io.spdk:cnode28592", 00:12:08.513 "min_cntlid": 0 00:12:08.513 } 00:12:08.513 } 00:12:08.513 Got JSON-RPC error response 00:12:08.513 GoRPCClient: error on JSON-RPC call' 00:12:08.513 03:55:43 -- target/invalid.sh@74 -- # [[ 2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode28592], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:08.513 request: 00:12:08.513 { 00:12:08.513 "method": "nvmf_create_subsystem", 00:12:08.513 "params": { 00:12:08.513 "nqn": "nqn.2016-06.io.spdk:cnode28592", 00:12:08.513 "min_cntlid": 0 00:12:08.513 } 00:12:08.513 } 00:12:08.513 Got JSON-RPC error response 00:12:08.513 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.513 03:55:43 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4281 -i 65520 00:12:08.806 [2024-11-08 03:55:43.757728] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4281: invalid cntlid range [65520-65519] 00:12:08.806 03:55:43 -- target/invalid.sh@75 -- # out='2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4281], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:08.806 request: 00:12:08.806 { 00:12:08.806 "method": "nvmf_create_subsystem", 00:12:08.806 "params": { 00:12:08.806 "nqn": "nqn.2016-06.io.spdk:cnode4281", 00:12:08.806 "min_cntlid": 65520 00:12:08.806 } 00:12:08.806 } 00:12:08.806 Got JSON-RPC error response 00:12:08.806 GoRPCClient: error on JSON-RPC call' 00:12:08.806 03:55:43 -- target/invalid.sh@76 -- # [[ 2024/11/08 03:55:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4281], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:08.806 request: 00:12:08.806 { 00:12:08.806 "method": "nvmf_create_subsystem", 00:12:08.806 "params": { 00:12:08.806 "nqn": "nqn.2016-06.io.spdk:cnode4281", 00:12:08.806 "min_cntlid": 65520 00:12:08.806 } 00:12:08.806 } 00:12:08.806 Got JSON-RPC error response 00:12:08.806 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:12:08.806 03:55:43 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25842 -I 0 00:12:09.064 [2024-11-08 03:55:44.106236] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25842: invalid cntlid range [1-0] 00:12:09.064 03:55:44 -- target/invalid.sh@77 -- # out='2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25842], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:09.064 request: 00:12:09.064 { 00:12:09.064 "method": "nvmf_create_subsystem", 00:12:09.064 "params": { 00:12:09.064 "nqn": "nqn.2016-06.io.spdk:cnode25842", 00:12:09.064 "max_cntlid": 0 00:12:09.064 } 00:12:09.064 } 00:12:09.064 Got JSON-RPC error response 00:12:09.064 GoRPCClient: error on JSON-RPC call' 00:12:09.064 03:55:44 -- target/invalid.sh@78 -- # [[ 2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25842], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:09.064 request: 00:12:09.064 { 00:12:09.064 "method": "nvmf_create_subsystem", 00:12:09.064 "params": { 00:12:09.064 "nqn": "nqn.2016-06.io.spdk:cnode25842", 00:12:09.064 "max_cntlid": 0 00:12:09.064 } 00:12:09.064 } 00:12:09.064 Got JSON-RPC error response 00:12:09.064 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.064 03:55:44 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23759 -I 65520 00:12:09.322 [2024-11-08 03:55:44.398664] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23759: invalid cntlid range [1-65520] 00:12:09.322 03:55:44 -- target/invalid.sh@79 -- # out='2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23759], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:09.322 request: 00:12:09.322 { 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "params": { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode23759", 00:12:09.322 "max_cntlid": 65520 00:12:09.322 } 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 GoRPCClient: error on JSON-RPC call' 00:12:09.322 03:55:44 -- target/invalid.sh@80 -- # [[ 2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23759], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:09.322 request: 00:12:09.322 { 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "params": { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode23759", 00:12:09.322 "max_cntlid": 65520 00:12:09.322 } 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.322 03:55:44 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4665 -i 6 -I 5 00:12:09.888 [2024-11-08 03:55:44.739126] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4665: invalid cntlid range [6-5] 00:12:09.888 03:55:44 -- 
target/invalid.sh@83 -- # out='2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4665], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:09.888 request: 00:12:09.888 { 00:12:09.888 "method": "nvmf_create_subsystem", 00:12:09.888 "params": { 00:12:09.888 "nqn": "nqn.2016-06.io.spdk:cnode4665", 00:12:09.888 "min_cntlid": 6, 00:12:09.888 "max_cntlid": 5 00:12:09.888 } 00:12:09.888 } 00:12:09.888 Got JSON-RPC error response 00:12:09.888 GoRPCClient: error on JSON-RPC call' 00:12:09.888 03:55:44 -- target/invalid.sh@84 -- # [[ 2024/11/08 03:55:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4665], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:09.888 request: 00:12:09.888 { 00:12:09.888 "method": "nvmf_create_subsystem", 00:12:09.888 "params": { 00:12:09.888 "nqn": "nqn.2016-06.io.spdk:cnode4665", 00:12:09.888 "min_cntlid": 6, 00:12:09.888 "max_cntlid": 5 00:12:09.888 } 00:12:09.888 } 00:12:09.888 Got JSON-RPC error response 00:12:09.888 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.888 03:55:44 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:09.888 03:55:44 -- target/invalid.sh@87 -- # out='request: 00:12:09.888 { 00:12:09.888 "name": "foobar", 00:12:09.888 "method": "nvmf_delete_target", 00:12:09.888 "req_id": 1 00:12:09.888 } 00:12:09.888 Got JSON-RPC error response 00:12:09.888 response: 00:12:09.888 { 00:12:09.888 "code": -32602, 00:12:09.888 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:09.888 }' 00:12:09.888 03:55:44 -- target/invalid.sh@88 -- # [[ request: 00:12:09.888 { 00:12:09.888 "name": "foobar", 00:12:09.888 "method": "nvmf_delete_target", 00:12:09.888 "req_id": 1 00:12:09.888 } 00:12:09.888 Got JSON-RPC error response 00:12:09.888 response: 00:12:09.888 { 00:12:09.888 "code": -32602, 00:12:09.888 "message": "The specified target doesn't exist, cannot delete it." 
00:12:09.888 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:09.888 03:55:44 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:09.888 03:55:44 -- target/invalid.sh@91 -- # nvmftestfini 00:12:09.888 03:55:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:09.888 03:55:44 -- nvmf/common.sh@116 -- # sync 00:12:09.888 03:55:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:09.888 03:55:44 -- nvmf/common.sh@119 -- # set +e 00:12:09.888 03:55:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:09.888 03:55:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:09.888 rmmod nvme_tcp 00:12:09.888 rmmod nvme_fabrics 00:12:09.888 rmmod nvme_keyring 00:12:10.145 03:55:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:10.145 03:55:45 -- nvmf/common.sh@123 -- # set -e 00:12:10.145 03:55:45 -- nvmf/common.sh@124 -- # return 0 00:12:10.145 03:55:45 -- nvmf/common.sh@477 -- # '[' -n 66635 ']' 00:12:10.145 03:55:45 -- nvmf/common.sh@478 -- # killprocess 66635 00:12:10.145 03:55:45 -- common/autotest_common.sh@936 -- # '[' -z 66635 ']' 00:12:10.145 03:55:45 -- common/autotest_common.sh@940 -- # kill -0 66635 00:12:10.145 03:55:45 -- common/autotest_common.sh@941 -- # uname 00:12:10.145 03:55:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.145 03:55:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66635 00:12:10.145 03:55:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:10.145 killing process with pid 66635 00:12:10.145 03:55:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:10.145 03:55:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66635' 00:12:10.145 03:55:45 -- common/autotest_common.sh@955 -- # kill 66635 00:12:10.145 03:55:45 -- common/autotest_common.sh@960 -- # wait 66635 00:12:10.403 03:55:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:10.403 03:55:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:10.403 03:55:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:10.403 03:55:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.403 03:55:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:10.403 03:55:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.403 03:55:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.403 03:55:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.403 03:55:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:10.403 00:12:10.403 real 0m6.413s 00:12:10.403 user 0m25.313s 00:12:10.403 sys 0m1.318s 00:12:10.403 03:55:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.403 03:55:45 -- common/autotest_common.sh@10 -- # set +x 00:12:10.403 ************************************ 00:12:10.403 END TEST nvmf_invalid 00:12:10.403 ************************************ 00:12:10.661 03:55:45 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:10.661 03:55:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.661 03:55:45 -- common/autotest_common.sh@10 -- # set +x 00:12:10.661 ************************************ 00:12:10.661 START TEST nvmf_abort 00:12:10.661 ************************************ 00:12:10.661 03:55:45 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:10.661 * Looking for test storage... 00:12:10.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.661 03:55:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:10.661 03:55:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:10.661 03:55:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:10.661 03:55:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:10.661 03:55:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:10.661 03:55:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:10.661 03:55:45 -- scripts/common.sh@335 -- # IFS=.-: 00:12:10.661 03:55:45 -- scripts/common.sh@335 -- # read -ra ver1 00:12:10.661 03:55:45 -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.661 03:55:45 -- scripts/common.sh@336 -- # read -ra ver2 00:12:10.661 03:55:45 -- scripts/common.sh@337 -- # local 'op=<' 00:12:10.661 03:55:45 -- scripts/common.sh@339 -- # ver1_l=2 00:12:10.661 03:55:45 -- scripts/common.sh@340 -- # ver2_l=1 00:12:10.661 03:55:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:10.661 03:55:45 -- scripts/common.sh@343 -- # case "$op" in 00:12:10.661 03:55:45 -- scripts/common.sh@344 -- # : 1 00:12:10.661 03:55:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:10.661 03:55:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.661 03:55:45 -- scripts/common.sh@364 -- # decimal 1 00:12:10.661 03:55:45 -- scripts/common.sh@352 -- # local d=1 00:12:10.661 03:55:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.661 03:55:45 -- scripts/common.sh@354 -- # echo 1 00:12:10.661 03:55:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:10.661 03:55:45 -- scripts/common.sh@365 -- # decimal 2 00:12:10.661 03:55:45 -- scripts/common.sh@352 -- # local d=2 00:12:10.661 03:55:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.661 03:55:45 -- scripts/common.sh@354 -- # echo 2 00:12:10.661 03:55:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:10.661 03:55:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:10.661 03:55:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:10.661 03:55:45 -- scripts/common.sh@367 -- # return 0 00:12:10.661 03:55:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.661 --rc genhtml_branch_coverage=1 00:12:10.661 --rc genhtml_function_coverage=1 00:12:10.661 --rc genhtml_legend=1 00:12:10.661 --rc geninfo_all_blocks=1 00:12:10.661 --rc geninfo_unexecuted_blocks=1 00:12:10.661 00:12:10.661 ' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.661 --rc genhtml_branch_coverage=1 00:12:10.661 --rc genhtml_function_coverage=1 00:12:10.661 --rc genhtml_legend=1 00:12:10.661 --rc geninfo_all_blocks=1 00:12:10.661 --rc geninfo_unexecuted_blocks=1 00:12:10.661 00:12:10.661 ' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.661 --rc genhtml_branch_coverage=1 00:12:10.661 --rc genhtml_function_coverage=1 00:12:10.661 --rc genhtml_legend=1 00:12:10.661 --rc 
geninfo_all_blocks=1 00:12:10.661 --rc geninfo_unexecuted_blocks=1 00:12:10.661 00:12:10.661 ' 00:12:10.661 03:55:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.661 --rc genhtml_branch_coverage=1 00:12:10.661 --rc genhtml_function_coverage=1 00:12:10.661 --rc genhtml_legend=1 00:12:10.661 --rc geninfo_all_blocks=1 00:12:10.661 --rc geninfo_unexecuted_blocks=1 00:12:10.661 00:12:10.661 ' 00:12:10.661 03:55:45 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.661 03:55:45 -- nvmf/common.sh@7 -- # uname -s 00:12:10.661 03:55:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.661 03:55:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.661 03:55:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.661 03:55:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.661 03:55:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.661 03:55:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.661 03:55:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.661 03:55:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.661 03:55:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.661 03:55:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.661 03:55:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:10.661 03:55:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:10.661 03:55:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.661 03:55:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.661 03:55:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.661 03:55:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.661 03:55:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.661 03:55:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.661 03:55:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.661 03:55:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.662 03:55:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.662 03:55:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.662 03:55:45 -- paths/export.sh@5 -- # export PATH 00:12:10.662 03:55:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.662 03:55:45 -- nvmf/common.sh@46 -- # : 0 00:12:10.662 03:55:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:10.662 03:55:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:10.662 03:55:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:10.662 03:55:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.662 03:55:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.662 03:55:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:10.662 03:55:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:10.662 03:55:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:10.662 03:55:45 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.662 03:55:45 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:10.662 03:55:45 -- target/abort.sh@14 -- # nvmftestinit 00:12:10.662 03:55:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:10.662 03:55:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.662 03:55:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:10.662 03:55:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:10.662 03:55:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:10.662 03:55:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.662 03:55:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.662 03:55:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.662 03:55:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:10.662 03:55:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:10.662 03:55:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:10.662 03:55:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:10.662 03:55:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:10.662 03:55:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:10.662 03:55:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.662 03:55:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.662 03:55:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.662 03:55:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:10.662 03:55:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.662 03:55:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.662 03:55:45 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.662 03:55:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.662 03:55:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.662 03:55:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.662 03:55:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.662 03:55:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.662 03:55:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:10.662 03:55:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:10.662 Cannot find device "nvmf_tgt_br" 00:12:10.662 03:55:45 -- nvmf/common.sh@154 -- # true 00:12:10.662 03:55:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.662 Cannot find device "nvmf_tgt_br2" 00:12:10.662 03:55:45 -- nvmf/common.sh@155 -- # true 00:12:10.662 03:55:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:10.662 03:55:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:10.662 Cannot find device "nvmf_tgt_br" 00:12:10.662 03:55:45 -- nvmf/common.sh@157 -- # true 00:12:10.662 03:55:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:10.662 Cannot find device "nvmf_tgt_br2" 00:12:10.662 03:55:45 -- nvmf/common.sh@158 -- # true 00:12:10.662 03:55:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:10.919 03:55:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:10.919 03:55:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.919 03:55:45 -- nvmf/common.sh@161 -- # true 00:12:10.919 03:55:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.919 03:55:45 -- nvmf/common.sh@162 -- # true 00:12:10.919 03:55:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.919 03:55:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.919 03:55:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.919 03:55:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.919 03:55:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.919 03:55:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.919 03:55:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.919 03:55:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:10.919 03:55:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:10.919 03:55:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:10.919 03:55:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:10.919 03:55:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:10.919 03:55:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:10.919 03:55:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.919 03:55:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.919 03:55:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:10.919 03:55:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:10.919 03:55:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:10.919 03:55:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.919 03:55:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.919 03:55:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.919 03:55:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.919 03:55:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.919 03:55:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:10.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:10.919 00:12:10.919 --- 10.0.0.2 ping statistics --- 00:12:10.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.919 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:10.919 03:55:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:10.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:10.919 00:12:10.919 --- 10.0.0.3 ping statistics --- 00:12:10.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.919 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:10.919 03:55:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:12:10.919 00:12:10.919 --- 10.0.0.1 ping statistics --- 00:12:10.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.920 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:10.920 03:55:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.920 03:55:46 -- nvmf/common.sh@421 -- # return 0 00:12:10.920 03:55:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:10.920 03:55:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.920 03:55:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:10.920 03:55:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:10.920 03:55:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.920 03:55:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:10.920 03:55:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:11.177 03:55:46 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:11.177 03:55:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:11.177 03:55:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.177 03:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:11.177 03:55:46 -- nvmf/common.sh@469 -- # nvmfpid=67158 00:12:11.177 03:55:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:11.177 03:55:46 -- nvmf/common.sh@470 -- # waitforlisten 67158 00:12:11.177 03:55:46 -- common/autotest_common.sh@829 -- # '[' -z 67158 ']' 00:12:11.177 03:55:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.177 03:55:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:11.177 03:55:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.177 03:55:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.177 03:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:11.177 [2024-11-08 03:55:46.107024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.177 [2024-11-08 03:55:46.107141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.177 [2024-11-08 03:55:46.251936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.436 [2024-11-08 03:55:46.399098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.436 [2024-11-08 03:55:46.399303] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.436 [2024-11-08 03:55:46.399323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.436 [2024-11-08 03:55:46.399337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.436 [2024-11-08 03:55:46.399506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.436 [2024-11-08 03:55:46.400095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.436 [2024-11-08 03:55:46.400159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.002 03:55:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.002 03:55:47 -- common/autotest_common.sh@862 -- # return 0 00:12:12.002 03:55:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:12.002 03:55:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.002 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 03:55:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.261 03:55:47 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 [2024-11-08 03:55:47.126288] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 Malloc0 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 Delay0 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 
03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 [2024-11-08 03:55:47.209616] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.261 03:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.261 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 03:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.261 03:55:47 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:12.519 [2024-11-08 03:55:47.384302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:14.424 Initializing NVMe Controllers 00:12:14.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:14.424 controller IO queue size 128 less than required 00:12:14.424 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:14.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:14.424 Initialization complete. Launching workers. 
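Note: before the abort results just below, it is worth untangling the configuration that produced them. rpc_cmd in the trace is the harness wrapper that effectively invokes scripts/rpc.py against the default /var/tmp/spdk.sock socket (an assumption from the wider test suite, not shown in this excerpt); the $rpc shorthand is mine. Spelled out that way, the abort test setup above is roughly:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # shorthand for this sketch only
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256    # TCP transport; options straight from $NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MB RAM-backed bdev, 4 KiB blocks
# ~1 s average and p99 latency on both reads and writes (values in microseconds),
# so the abort tool always has in-flight I/O to cancel.
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Queue depth 128 against a deliberately slow namespace for 1 second;
# the deep queue is the point, since queued commands are what get aborted.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128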
00:12:14.424 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36680 00:12:14.424 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36741, failed to submit 62 00:12:14.424 success 36680, unsuccess 61, failed 0 00:12:14.424 03:55:49 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:14.424 03:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.424 03:55:49 -- common/autotest_common.sh@10 -- # set +x 00:12:14.424 03:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.424 03:55:49 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:14.424 03:55:49 -- target/abort.sh@38 -- # nvmftestfini 00:12:14.424 03:55:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:14.424 03:55:49 -- nvmf/common.sh@116 -- # sync 00:12:14.424 03:55:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:14.424 03:55:49 -- nvmf/common.sh@119 -- # set +e 00:12:14.424 03:55:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:14.424 03:55:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:14.424 rmmod nvme_tcp 00:12:14.424 rmmod nvme_fabrics 00:12:14.424 rmmod nvme_keyring 00:12:14.424 03:55:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:14.424 03:55:49 -- nvmf/common.sh@123 -- # set -e 00:12:14.424 03:55:49 -- nvmf/common.sh@124 -- # return 0 00:12:14.424 03:55:49 -- nvmf/common.sh@477 -- # '[' -n 67158 ']' 00:12:14.424 03:55:49 -- nvmf/common.sh@478 -- # killprocess 67158 00:12:14.424 03:55:49 -- common/autotest_common.sh@936 -- # '[' -z 67158 ']' 00:12:14.424 03:55:49 -- common/autotest_common.sh@940 -- # kill -0 67158 00:12:14.424 03:55:49 -- common/autotest_common.sh@941 -- # uname 00:12:14.424 03:55:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:14.681 03:55:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67158 00:12:14.681 03:55:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:14.681 03:55:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:14.681 killing process with pid 67158 00:12:14.681 03:55:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67158' 00:12:14.681 03:55:49 -- common/autotest_common.sh@955 -- # kill 67158 00:12:14.681 03:55:49 -- common/autotest_common.sh@960 -- # wait 67158 00:12:14.938 03:55:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:14.938 03:55:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:14.938 03:55:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:14.938 03:55:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.938 03:55:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:14.938 03:55:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.938 03:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.938 03:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.938 03:55:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:14.938 00:12:14.938 real 0m4.422s 00:12:14.938 user 0m12.373s 00:12:14.938 sys 0m1.029s 00:12:14.938 03:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:14.938 03:55:49 -- common/autotest_common.sh@10 -- # set +x 00:12:14.938 ************************************ 00:12:14.938 END TEST nvmf_abort 00:12:14.938 ************************************ 00:12:14.938 03:55:49 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:14.938 03:55:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:14.938 03:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.938 03:55:49 -- common/autotest_common.sh@10 -- # set +x 00:12:14.938 ************************************ 00:12:14.938 START TEST nvmf_ns_hotplug_stress 00:12:14.938 ************************************ 00:12:14.938 03:55:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:14.938 * Looking for test storage... 00:12:14.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.938 03:55:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:14.938 03:55:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:15.196 03:55:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:15.196 03:55:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:15.196 03:55:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:15.196 03:55:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:15.196 03:55:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:15.196 03:55:50 -- scripts/common.sh@335 -- # IFS=.-: 00:12:15.196 03:55:50 -- scripts/common.sh@335 -- # read -ra ver1 00:12:15.196 03:55:50 -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.196 03:55:50 -- scripts/common.sh@336 -- # read -ra ver2 00:12:15.196 03:55:50 -- scripts/common.sh@337 -- # local 'op=<' 00:12:15.196 03:55:50 -- scripts/common.sh@339 -- # ver1_l=2 00:12:15.196 03:55:50 -- scripts/common.sh@340 -- # ver2_l=1 00:12:15.196 03:55:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:15.196 03:55:50 -- scripts/common.sh@343 -- # case "$op" in 00:12:15.196 03:55:50 -- scripts/common.sh@344 -- # : 1 00:12:15.196 03:55:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:15.196 03:55:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.196 03:55:50 -- scripts/common.sh@364 -- # decimal 1 00:12:15.196 03:55:50 -- scripts/common.sh@352 -- # local d=1 00:12:15.196 03:55:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.196 03:55:50 -- scripts/common.sh@354 -- # echo 1 00:12:15.196 03:55:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:15.196 03:55:50 -- scripts/common.sh@365 -- # decimal 2 00:12:15.196 03:55:50 -- scripts/common.sh@352 -- # local d=2 00:12:15.196 03:55:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.196 03:55:50 -- scripts/common.sh@354 -- # echo 2 00:12:15.196 03:55:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:15.196 03:55:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:15.196 03:55:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:15.196 03:55:50 -- scripts/common.sh@367 -- # return 0 00:12:15.196 03:55:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.196 03:55:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:15.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.196 --rc genhtml_branch_coverage=1 00:12:15.196 --rc genhtml_function_coverage=1 00:12:15.196 --rc genhtml_legend=1 00:12:15.196 --rc geninfo_all_blocks=1 00:12:15.196 --rc geninfo_unexecuted_blocks=1 00:12:15.196 00:12:15.196 ' 00:12:15.196 03:55:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:15.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.196 --rc genhtml_branch_coverage=1 00:12:15.196 --rc genhtml_function_coverage=1 00:12:15.196 --rc genhtml_legend=1 00:12:15.196 --rc geninfo_all_blocks=1 00:12:15.196 --rc geninfo_unexecuted_blocks=1 00:12:15.196 00:12:15.196 ' 00:12:15.196 03:55:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:15.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.196 --rc genhtml_branch_coverage=1 00:12:15.196 --rc genhtml_function_coverage=1 00:12:15.196 --rc genhtml_legend=1 00:12:15.196 --rc geninfo_all_blocks=1 00:12:15.196 --rc geninfo_unexecuted_blocks=1 00:12:15.196 00:12:15.196 ' 00:12:15.196 03:55:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:15.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.196 --rc genhtml_branch_coverage=1 00:12:15.196 --rc genhtml_function_coverage=1 00:12:15.196 --rc genhtml_legend=1 00:12:15.196 --rc geninfo_all_blocks=1 00:12:15.196 --rc geninfo_unexecuted_blocks=1 00:12:15.196 00:12:15.196 ' 00:12:15.196 03:55:50 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.196 03:55:50 -- nvmf/common.sh@7 -- # uname -s 00:12:15.196 03:55:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.196 03:55:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.196 03:55:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.196 03:55:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.196 03:55:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.196 03:55:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.196 03:55:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.196 03:55:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.196 03:55:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.196 03:55:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
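Note: the scripts/common.sh churn traced above (lt 1.15 2 going through cmp_versions and decimal) is just a field-wise dotted-version compare used to pick lcov option spellings. A minimal sketch of that logic, leaving out the decimal() sanitizer the real helper runs on each field:

lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Missing fields compare as 0, so 1.15 vs 2 walks (1,15) against (2).
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"

Since lcov here reports 1.15, the compare succeeds and the run exports LCOV_OPTS with the legacy lcov_branch_coverage/lcov_function_coverage spellings seen above.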
00:12:15.196 03:55:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:15.196 03:55:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.196 03:55:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.196 03:55:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.196 03:55:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.196 03:55:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.196 03:55:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.196 03:55:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.196 03:55:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.196 03:55:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.196 03:55:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.196 03:55:50 -- paths/export.sh@5 -- # export PATH 00:12:15.196 03:55:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.196 03:55:50 -- nvmf/common.sh@46 -- # : 0 00:12:15.196 03:55:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:15.196 03:55:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:15.196 03:55:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:15.196 03:55:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.196 03:55:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.196 03:55:50 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:15.196 03:55:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:15.196 03:55:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:15.196 03:55:50 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.196 03:55:50 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:15.196 03:55:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:15.196 03:55:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.196 03:55:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:15.196 03:55:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:15.196 03:55:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:15.196 03:55:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.196 03:55:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.196 03:55:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.196 03:55:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:15.196 03:55:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:15.196 03:55:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.196 03:55:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.196 03:55:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.196 03:55:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:15.196 03:55:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.196 03:55:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.196 03:55:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.196 03:55:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.196 03:55:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.196 03:55:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.196 03:55:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.196 03:55:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.196 03:55:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:15.196 03:55:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:15.196 Cannot find device "nvmf_tgt_br" 00:12:15.196 03:55:50 -- nvmf/common.sh@154 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.196 Cannot find device "nvmf_tgt_br2" 00:12:15.196 03:55:50 -- nvmf/common.sh@155 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:15.196 03:55:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:15.196 Cannot find device "nvmf_tgt_br" 00:12:15.196 03:55:50 -- nvmf/common.sh@157 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:15.196 Cannot find device "nvmf_tgt_br2" 00:12:15.196 03:55:50 -- nvmf/common.sh@158 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:15.196 03:55:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:15.196 03:55:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.196 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:15.196 03:55:50 -- nvmf/common.sh@161 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.196 03:55:50 -- nvmf/common.sh@162 -- # true 00:12:15.196 03:55:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.196 03:55:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.196 03:55:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.196 03:55:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.455 03:55:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.455 03:55:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.455 03:55:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.455 03:55:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:15.455 03:55:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:15.455 03:55:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:15.455 03:55:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:15.455 03:55:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:15.455 03:55:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:15.455 03:55:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.455 03:55:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.455 03:55:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.455 03:55:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:15.455 03:55:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:15.455 03:55:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.455 03:55:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.455 03:55:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.455 03:55:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.455 03:55:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.455 03:55:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:15.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:15.455 00:12:15.455 --- 10.0.0.2 ping statistics --- 00:12:15.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.455 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:15.455 03:55:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:15.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:12:15.455 00:12:15.455 --- 10.0.0.3 ping statistics --- 00:12:15.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.455 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:15.455 03:55:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:15.455 00:12:15.455 --- 10.0.0.1 ping statistics --- 00:12:15.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.455 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:15.455 03:55:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.455 03:55:50 -- nvmf/common.sh@421 -- # return 0 00:12:15.455 03:55:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:15.455 03:55:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.455 03:55:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:15.455 03:55:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:15.455 03:55:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.455 03:55:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:15.455 03:55:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:15.455 03:55:50 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:15.455 03:55:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:15.455 03:55:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.455 03:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:15.455 03:55:50 -- nvmf/common.sh@469 -- # nvmfpid=67421 00:12:15.455 03:55:50 -- nvmf/common.sh@470 -- # waitforlisten 67421 00:12:15.455 03:55:50 -- common/autotest_common.sh@829 -- # '[' -z 67421 ']' 00:12:15.455 03:55:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:15.455 03:55:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.455 03:55:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.455 03:55:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.455 03:55:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.455 03:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:15.455 [2024-11-08 03:55:50.529285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:15.455 [2024-11-08 03:55:50.529398] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.714 [2024-11-08 03:55:50.656978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.714 [2024-11-08 03:55:50.757897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:15.714 [2024-11-08 03:55:50.758026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.714 [2024-11-08 03:55:50.758038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.714 [2024-11-08 03:55:50.758046] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
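Note: the nvmf_tgt invocation here (-i 0 -e 0xFFFF -m 0xE) explains the notices just above. -m is a hex CPU mask with one bit per core, hence the three reactors on cores 1, 2 and 3 with core 0 left free, and -e 0xFFFF enables every tracepoint group, which also appears to be where the harmless RDMA_REQ_RDY_TO_COMPL_PEND name-too-long error comes from (trace description registration, not actual RDMA traffic). A quick check of the mask:

printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE: cores 1-3 selected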
00:12:15.714 [2024-11-08 03:55:50.758214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.714 [2024-11-08 03:55:50.759324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.714 [2024-11-08 03:55:50.759381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.649 03:55:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.649 03:55:51 -- common/autotest_common.sh@862 -- # return 0 00:12:16.649 03:55:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:16.649 03:55:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:16.649 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:12:16.649 03:55:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.649 03:55:51 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:16.649 03:55:51 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:16.649 [2024-11-08 03:55:51.645146] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.649 03:55:51 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:16.911 03:55:51 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.173 [2024-11-08 03:55:52.091687] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.173 03:55:52 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.432 03:55:52 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:17.692 Malloc0 00:12:17.692 03:55:52 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:17.951 Delay0 00:12:17.951 03:55:52 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.210 03:55:53 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:18.210 NULL1 00:12:18.210 03:55:53 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:18.778 03:55:53 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67553 00:12:18.778 03:55:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:18.778 03:55:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.778 03:55:53 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:19.714 Read completed with error (sct=0, sc=11) 00:12:19.714 03:55:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.973 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:19.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.232 03:55:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:20.232 03:55:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:20.491 true 00:12:20.491 03:55:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:20.491 03:55:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.058 03:55:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.573 03:55:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:21.573 03:55:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:21.831 true 00:12:21.831 03:55:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:21.831 03:55:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.089 03:55:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.348 03:55:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:22.348 03:55:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:22.607 true 00:12:22.607 03:55:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:22.607 03:55:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.867 03:55:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.126 03:55:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:23.126 03:55:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:23.385 true 00:12:23.385 03:55:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:23.385 03:55:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.333 03:55:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.614 03:55:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:24.614 03:55:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:24.614 true 00:12:24.614 03:55:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:24.614 03:55:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.872 03:55:59 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.130 03:56:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:25.131 03:56:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:25.389 true 00:12:25.389 03:56:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:25.389 03:56:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.325 03:56:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.584 03:56:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:26.584 03:56:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:26.843 true 00:12:26.843 03:56:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:26.843 03:56:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.101 03:56:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.360 03:56:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:27.360 03:56:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:27.360 true 00:12:27.619 03:56:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:27.619 03:56:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.619 03:56:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.877 03:56:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:27.877 03:56:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:28.442 true 00:12:28.442 03:56:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:28.442 03:56:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.374 03:56:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.632 03:56:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:29.632 03:56:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:29.889 true 00:12:29.889 03:56:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:29.889 03:56:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.147 03:56:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.404 03:56:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:30.404 03:56:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:30.662 true 00:12:30.920 03:56:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:30.920 03:56:05 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.920 03:56:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.178 03:56:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:31.178 03:56:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:31.436 true 00:12:31.436 03:56:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:31.436 03:56:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.695 03:56:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.954 03:56:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:31.954 03:56:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:32.212 true 00:12:32.212 03:56:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:32.212 03:56:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.172 03:56:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.430 03:56:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:33.430 03:56:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:33.688 true 00:12:33.688 03:56:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:33.688 03:56:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.623 03:56:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.623 03:56:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:34.623 03:56:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:34.883 true 00:12:34.883 03:56:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:34.883 03:56:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.141 03:56:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.400 03:56:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:35.400 03:56:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:35.658 true 00:12:35.658 03:56:10 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:35.658 03:56:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.594 03:56:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.594 03:56:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:36.594 03:56:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:36.853 true 00:12:36.853 03:56:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:36.853 03:56:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.113 03:56:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.372 03:56:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:37.372 03:56:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:37.631 true 00:12:37.631 03:56:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:37.631 03:56:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.566 03:56:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.825 03:56:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:38.825 03:56:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:39.083 true 00:12:39.083 03:56:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:39.083 03:56:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.342 03:56:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.909 03:56:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:39.909 03:56:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:40.168 true 00:12:40.168 03:56:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:40.168 03:56:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.427 03:56:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.686 03:56:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:40.686 03:56:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:40.944 true 00:12:40.944 03:56:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:40.944 03:56:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.215 03:56:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.474 03:56:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:41.474 03:56:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:41.733 true 00:12:41.733 03:56:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:41.733 03:56:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.669 03:56:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.669 03:56:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:42.669 03:56:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:42.928 true 00:12:42.928 03:56:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:42.928 03:56:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.495 03:56:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.496 03:56:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:43.496 03:56:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:43.754 true 00:12:43.754 03:56:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:43.754 03:56:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.012 03:56:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.271 03:56:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:44.271 03:56:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:44.529 true 00:12:44.529 03:56:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:44.529 03:56:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.464 03:56:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.722 03:56:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:45.722 03:56:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:45.981 true 00:12:45.981 03:56:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:45.981 03:56:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.239 03:56:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.497 03:56:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:46.497 03:56:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:46.756 true 00:12:46.756 03:56:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:46.756 03:56:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.691 03:56:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.059 03:56:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:48.059 03:56:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:48.059 true 00:12:48.059 03:56:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553 00:12:48.059 03:56:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.322 03:56:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.581 03:56:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:48.581 03:56:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:48.839 true 00:12:48.839 Initializing NVMe Controllers 00:12:48.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:48.839 Controller IO queue size 128, less than required. 00:12:48.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:48.839 Controller IO queue size 128, less than required. 00:12:48.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:48.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:48.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:48.839 Initialization complete. Launching workers. 
00:12:48.839 ========================================================
00:12:48.839 Latency(us)
00:12:48.839 Device Information : IOPS MiB/s Average min max
00:12:48.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 566.43 0.28 99831.65 2620.48 1110345.64
00:12:48.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10970.66 5.36 11666.95 2176.48 614574.97
00:12:48.839 ========================================================
00:12:48.839 Total : 11537.09 5.63 15995.55 2176.48 1110345.64
00:12:48.840
00:12:48.840 03:56:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67553
00:12:48.840 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67553) - No such process
00:12:48.840 03:56:23 -- target/ns_hotplug_stress.sh@53 -- # wait 67553
00:12:48.840 03:56:23 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:49.098 03:56:24 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:49.357 03:56:24 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:49.357 03:56:24 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:49.357 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:49.357 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:49.357 03:56:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:49.616 null0
00:12:49.616 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:49.616 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:49.616 03:56:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:49.875 null1
00:12:49.875 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:49.875 03:56:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:49.875 03:56:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:50.134 null2
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:12:50.134 null3
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:50.134 03:56:25 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:12:50.393 null4
00:12:50.393 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:50.393 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:50.393 03:56:25 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:12:50.652 null5
00:12:50.652 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:50.652 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:50.652 03:56:25 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:12:50.911 null6
00:12:50.911 03:56:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:50.911 03:56:25 --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:50.911 03:56:25 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:51.169 null7 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
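Note: the heavily interleaved @59-@66 trace around this point is eight backgrounded add_remove workers being spawned. Untangled, the hotplug storm that ns_hotplug_stress.sh drives (using the $rpc_py defined at the top of the script) is roughly:

nthreads=8
pids=()
add_remove() {   # one worker: hot-add and hot-remove a single namespace ID, ten times
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}
for ((i = 0; i < nthreads; i++)); do
    $rpc_py bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4 KiB blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &             # nsid 1..8 paired with null0..null7
    pids+=($!)
done
wait "${pids[@]}"   # matches the 'wait 68578 68580 ...' seen in the trace

Eight concurrent add/remove streams against one subsystem is the actual stress here; the earlier resize loop (null_size 1001 through 1029 via bdev_null_resize, interleaved with the Delay0 add/remove cycle) ran while spdk_nvme_perf held the connection open, which is why each step was guarded by kill -0 $PERF_PID.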
00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:51.169 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@66 -- # wait 68578 68580 68582 68583 68585 68587 68590 68591 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.170 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.429 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.688 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:51.947 03:56:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:51.947 03:56:26 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.947 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.947 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.206 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:52.465 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.724 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:52.983 03:56:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:52.983 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.983 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.243 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.502 
03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.502 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.761 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.020 03:56:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.020 03:56:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:54.020 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.279 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:54.538 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.796 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.055 03:56:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
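
For anyone replaying one round of this by hand, these are the three RPCs the phase exercises, with the same arguments the trace shows. As I read the rpc.py usage, the 100 and 4096 passed to bdev_null_create are the size in MiB and the block size in bytes; null bdevs complete I/O without backing storage, which makes them cheap placeholders for hotplug stress:

# create a 100 MiB null bdev with 4096-byte blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
# expose it as namespace 1 of the subsystem under test, then pull it back out
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

The stress comes from eight of these add/remove pairs hammering a single subsystem concurrently, which is precisely the window where attach/detach races inside the target would surface.
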
00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.055 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:55.314 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.573 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.832 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:56.091 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:56.091 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.091 03:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.091 03:56:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:56.091 03:56:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:56.091 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.350 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.609 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.868 03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.868 
03:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.868 03:56:31 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:56.868 03:56:31 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:56.868 03:56:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.868 03:56:31 -- nvmf/common.sh@116 -- # sync 00:12:56.868 03:56:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.868 03:56:31 -- nvmf/common.sh@119 -- # set +e 00:12:56.868 03:56:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.868 03:56:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.868 rmmod nvme_tcp 00:12:56.868 rmmod nvme_fabrics 00:12:56.868 rmmod nvme_keyring 00:12:56.868 03:56:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.868 03:56:31 -- nvmf/common.sh@123 -- # set -e 00:12:56.868 03:56:31 -- nvmf/common.sh@124 -- # return 0 00:12:56.868 03:56:31 -- nvmf/common.sh@477 -- # '[' -n 67421 ']' 00:12:56.868 03:56:31 -- nvmf/common.sh@478 -- # killprocess 67421 00:12:56.868 03:56:31 -- common/autotest_common.sh@936 -- # '[' -z 67421 ']' 00:12:56.868 03:56:31 -- common/autotest_common.sh@940 -- # kill -0 67421 00:12:56.868 03:56:31 -- common/autotest_common.sh@941 -- # uname 00:12:56.868 03:56:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.868 03:56:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67421 00:12:56.868 03:56:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:56.868 03:56:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:56.868 killing process with pid 67421 00:12:56.868 03:56:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67421' 00:12:56.868 03:56:31 -- common/autotest_common.sh@955 -- # kill 67421 00:12:56.868 03:56:31 -- common/autotest_common.sh@960 -- # wait 67421 00:12:57.127 03:56:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:57.127 03:56:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:57.127 03:56:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:57.127 03:56:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.127 03:56:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:57.127 03:56:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.127 03:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.127 03:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.127 03:56:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:57.127 ************************************ 00:12:57.127 END TEST nvmf_ns_hotplug_stress 00:12:57.127 ************************************ 00:12:57.127 00:12:57.127 real 0m42.211s 00:12:57.127 user 3m24.387s 00:12:57.127 sys 0m12.440s 00:12:57.127 03:56:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.127 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:57.386 03:56:32 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:57.386 03:56:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:57.386 03:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.386 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:57.386 ************************************ 00:12:57.386 START TEST nvmf_connect_stress 00:12:57.386 ************************************ 00:12:57.386 03:56:32 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:57.386 * Looking for test storage... 00:12:57.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.386 03:56:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:57.386 03:56:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:57.386 03:56:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:57.386 03:56:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:57.386 03:56:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:57.386 03:56:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:57.387 03:56:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:57.387 03:56:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:57.387 03:56:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:57.387 03:56:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.387 03:56:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:57.387 03:56:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:57.387 03:56:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:57.387 03:56:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:57.387 03:56:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:57.387 03:56:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:57.387 03:56:32 -- scripts/common.sh@344 -- # : 1 00:12:57.387 03:56:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:57.387 03:56:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.387 03:56:32 -- scripts/common.sh@364 -- # decimal 1 00:12:57.387 03:56:32 -- scripts/common.sh@352 -- # local d=1 00:12:57.387 03:56:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.387 03:56:32 -- scripts/common.sh@354 -- # echo 1 00:12:57.387 03:56:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:57.387 03:56:32 -- scripts/common.sh@365 -- # decimal 2 00:12:57.387 03:56:32 -- scripts/common.sh@352 -- # local d=2 00:12:57.387 03:56:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.387 03:56:32 -- scripts/common.sh@354 -- # echo 2 00:12:57.387 03:56:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:57.387 03:56:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:57.387 03:56:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:57.387 03:56:32 -- scripts/common.sh@367 -- # return 0 00:12:57.387 03:56:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.387 03:56:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:57.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.387 --rc genhtml_branch_coverage=1 00:12:57.387 --rc genhtml_function_coverage=1 00:12:57.387 --rc genhtml_legend=1 00:12:57.387 --rc geninfo_all_blocks=1 00:12:57.387 --rc geninfo_unexecuted_blocks=1 00:12:57.387 00:12:57.387 ' 00:12:57.387 03:56:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:57.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.387 --rc genhtml_branch_coverage=1 00:12:57.387 --rc genhtml_function_coverage=1 00:12:57.387 --rc genhtml_legend=1 00:12:57.387 --rc geninfo_all_blocks=1 00:12:57.387 --rc geninfo_unexecuted_blocks=1 00:12:57.387 00:12:57.387 ' 00:12:57.387 03:56:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:57.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.387 --rc genhtml_branch_coverage=1 00:12:57.387 --rc genhtml_function_coverage=1 00:12:57.387 --rc genhtml_legend=1 
00:12:57.387 --rc geninfo_all_blocks=1 00:12:57.387 --rc geninfo_unexecuted_blocks=1 00:12:57.387 00:12:57.387 ' 00:12:57.387 03:56:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:57.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.387 --rc genhtml_branch_coverage=1 00:12:57.387 --rc genhtml_function_coverage=1 00:12:57.387 --rc genhtml_legend=1 00:12:57.387 --rc geninfo_all_blocks=1 00:12:57.387 --rc geninfo_unexecuted_blocks=1 00:12:57.387 00:12:57.387 ' 00:12:57.387 03:56:32 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.387 03:56:32 -- nvmf/common.sh@7 -- # uname -s 00:12:57.387 03:56:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.387 03:56:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.387 03:56:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.387 03:56:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.387 03:56:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.387 03:56:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.387 03:56:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.387 03:56:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.387 03:56:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.387 03:56:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:57.387 03:56:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:12:57.387 03:56:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.387 03:56:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.387 03:56:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.387 03:56:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.387 03:56:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.387 03:56:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.387 03:56:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.387 03:56:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.387 03:56:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.387 03:56:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.387 03:56:32 -- paths/export.sh@5 -- # export PATH 00:12:57.387 03:56:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.387 03:56:32 -- nvmf/common.sh@46 -- # : 0 00:12:57.387 03:56:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:57.387 03:56:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:57.387 03:56:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:57.387 03:56:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.387 03:56:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.387 03:56:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:57.387 03:56:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:57.387 03:56:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:57.387 03:56:32 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:57.387 03:56:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:57.387 03:56:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.387 03:56:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:57.387 03:56:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:57.387 03:56:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:57.387 03:56:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.387 03:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.387 03:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.387 03:56:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:57.387 03:56:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:57.387 03:56:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.387 03:56:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.387 03:56:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:57.387 03:56:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:57.387 03:56:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.387 03:56:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.387 03:56:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.387 03:56:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:57.387 03:56:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.387 03:56:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.387 03:56:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.387 03:56:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.387 03:56:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:57.387 03:56:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:57.387 Cannot find device "nvmf_tgt_br" 00:12:57.387 03:56:32 -- nvmf/common.sh@154 -- # true 00:12:57.387 03:56:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.387 Cannot find device "nvmf_tgt_br2" 00:12:57.387 03:56:32 -- nvmf/common.sh@155 -- # true 00:12:57.387 03:56:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:57.387 03:56:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:57.387 Cannot find device "nvmf_tgt_br" 00:12:57.387 03:56:32 -- nvmf/common.sh@157 -- # true 00:12:57.387 03:56:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:57.646 Cannot find device "nvmf_tgt_br2" 00:12:57.646 03:56:32 -- nvmf/common.sh@158 -- # true 00:12:57.646 03:56:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:57.646 03:56:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:57.646 03:56:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.646 03:56:32 -- nvmf/common.sh@161 -- # true 00:12:57.646 03:56:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.646 03:56:32 -- nvmf/common.sh@162 -- # true 00:12:57.646 03:56:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:57.646 03:56:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:57.646 03:56:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:57.646 03:56:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:57.646 03:56:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:57.646 03:56:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:57.646 03:56:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:57.646 03:56:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:57.646 03:56:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:57.646 03:56:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:57.646 03:56:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:57.646 03:56:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:57.646 03:56:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:57.646 03:56:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:57.647 03:56:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:57.647 03:56:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:57.647 03:56:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:57.647 03:56:32 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:57.647 03:56:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:57.647 03:56:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:57.647 03:56:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:57.647 03:56:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:57.647 03:56:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:57.647 03:56:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:57.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:57.647 00:12:57.647 --- 10.0.0.2 ping statistics --- 00:12:57.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.647 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:57.647 03:56:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:57.647 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:57.647 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:12:57.647 00:12:57.647 --- 10.0.0.3 ping statistics --- 00:12:57.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.647 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:57.647 03:56:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:57.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:57.905 00:12:57.905 --- 10.0.0.1 ping statistics --- 00:12:57.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.905 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:57.905 03:56:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.905 03:56:32 -- nvmf/common.sh@421 -- # return 0 00:12:57.905 03:56:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:57.905 03:56:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.905 03:56:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:57.905 03:56:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:57.905 03:56:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.905 03:56:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:57.905 03:56:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:57.905 03:56:32 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:57.905 03:56:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:57.905 03:56:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.905 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:57.905 03:56:32 -- nvmf/common.sh@469 -- # nvmfpid=69913 00:12:57.905 03:56:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:57.905 03:56:32 -- nvmf/common.sh@470 -- # waitforlisten 69913 00:12:57.905 03:56:32 -- common/autotest_common.sh@829 -- # '[' -z 69913 ']' 00:12:57.905 03:56:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.905 03:56:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.905 03:56:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
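
Condensing the nvmf_veth_init trace above: the target runs inside the nvmf_tgt_ns_spdk network namespace and is reached from the host-side initiator through veth pairs enslaved to a single bridge. A minimal replay of the topology, leaving out the second target interface (10.0.0.3) and the individual link-up steps, would be roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this plumbing is up before nvmf_tgt is launched in the namespace; 10.0.0.2:4420 is the listener address the connect_stress run below points at.
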
00:12:57.905 03:56:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.905 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:57.906 [2024-11-08 03:56:32.843318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:57.906 [2024-11-08 03:56:32.843408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.906 [2024-11-08 03:56:32.985633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.164 [2024-11-08 03:56:33.080960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.164 [2024-11-08 03:56:33.081154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.164 [2024-11-08 03:56:33.081172] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.164 [2024-11-08 03:56:33.081184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.164 [2024-11-08 03:56:33.081690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.164 [2024-11-08 03:56:33.082551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.164 [2024-11-08 03:56:33.082593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.730 03:56:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.730 03:56:33 -- common/autotest_common.sh@862 -- # return 0 00:12:58.730 03:56:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:58.730 03:56:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.730 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 03:56:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.730 03:56:33 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.730 03:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.730 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 [2024-11-08 03:56:33.822028] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.730 03:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.730 03:56:33 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.730 03:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.730 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 03:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.730 03:56:33 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.730 03:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.730 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:58.989 [2024-11-08 03:56:33.839902] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.989 03:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.989 03:56:33 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:58.989 03:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.989 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:58.989 NULL1 00:12:58.989 
03:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.989 03:56:33 -- target/connect_stress.sh@21 -- # PERF_PID=69965 00:12:58.989 03:56:33 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:58.989 03:56:33 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:58.989 03:56:33 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.989 03:56:33 -- target/connect_stress.sh@28 -- # cat 00:12:58.989 03:56:33 -- target/connect_stress.sh@34 -- # kill -0 
69965 00:12:58.989 03:56:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.989 03:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.989 03:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:59.247 03:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.247 03:56:34 -- target/connect_stress.sh@34 -- # kill -0 69965 00:12:59.247 03:56:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.247 03:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.247 03:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:59.506 03:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.506 03:56:34 -- target/connect_stress.sh@34 -- # kill -0 69965 00:12:59.506 03:56:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.506 03:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.506 03:56:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.073 03:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.073 03:56:34 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:00.073 03:56:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.073 03:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.073 03:56:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.332 03:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.332 03:56:35 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:00.332 03:56:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.332 03:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.332 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:13:00.591 03:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.591 03:56:35 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:00.591 03:56:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.591 03:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.591 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:13:00.850 03:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.850 03:56:35 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:00.850 03:56:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.850 03:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.850 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:13:01.109 03:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.109 03:56:36 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:01.109 03:56:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.109 03:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.109 03:56:36 -- common/autotest_common.sh@10 -- # set +x 00:13:01.678 03:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.678 03:56:36 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:01.678 03:56:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.678 03:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.678 03:56:36 -- common/autotest_common.sh@10 -- # set +x 00:13:01.937 03:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.937 03:56:36 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:01.937 03:56:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.937 03:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.937 03:56:36 -- common/autotest_common.sh@10 -- # set +x 00:13:02.197 03:56:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.197 03:56:37 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:02.197 03:56:37 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.197 03:56:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.197 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:02.456 03:56:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.456 03:56:37 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:02.456 03:56:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.456 03:56:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.456 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:02.715 03:56:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.715 03:56:37 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:02.715 03:56:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.715 03:56:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.715 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.308 03:56:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.308 03:56:38 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:03.308 03:56:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.308 03:56:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.308 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:13:03.567 03:56:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.567 03:56:38 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:03.567 03:56:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.567 03:56:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.567 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:13:03.826 03:56:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.826 03:56:38 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:03.826 03:56:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.826 03:56:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.826 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.085 03:56:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.085 03:56:39 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:04.085 03:56:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.085 03:56:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.085 03:56:39 -- common/autotest_common.sh@10 -- # set +x 00:13:04.344 03:56:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.344 03:56:39 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:04.344 03:56:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.344 03:56:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.344 03:56:39 -- common/autotest_common.sh@10 -- # set +x 00:13:04.911 03:56:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.911 03:56:39 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:04.911 03:56:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.911 03:56:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.911 03:56:39 -- common/autotest_common.sh@10 -- # set +x 00:13:05.170 03:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.170 03:56:40 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:05.170 03:56:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.170 03:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.170 03:56:40 -- common/autotest_common.sh@10 -- # set +x 00:13:05.429 03:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.429 03:56:40 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:05.429 03:56:40 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:05.429 03:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.429 03:56:40 -- common/autotest_common.sh@10 -- # set +x 00:13:05.688 03:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.688 03:56:40 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:05.688 03:56:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.688 03:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.688 03:56:40 -- common/autotest_common.sh@10 -- # set +x 00:13:05.947 03:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.947 03:56:41 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:05.947 03:56:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.947 03:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.947 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:13:06.514 03:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.514 03:56:41 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:06.514 03:56:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.514 03:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.514 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:13:06.773 03:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.773 03:56:41 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:06.773 03:56:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.773 03:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.773 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:13:07.032 03:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.032 03:56:41 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:07.032 03:56:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.032 03:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.032 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:13:07.291 03:56:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.291 03:56:42 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:07.291 03:56:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.291 03:56:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.291 03:56:42 -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 03:56:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.549 03:56:42 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:07.549 03:56:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.549 03:56:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.549 03:56:42 -- common/autotest_common.sh@10 -- # set +x 00:13:08.116 03:56:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.116 03:56:42 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:08.116 03:56:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.116 03:56:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.116 03:56:42 -- common/autotest_common.sh@10 -- # set +x 00:13:08.375 03:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.375 03:56:43 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:08.375 03:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.375 03:56:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.375 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 03:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 03:56:43 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:08.633 03:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.633 03:56:43 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:13:08.892 03:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.892 03:56:43 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:08.892 03:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.892 03:56:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.892 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:13:09.150 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:09.150 03:56:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.150 03:56:44 -- target/connect_stress.sh@34 -- # kill -0 69965 00:13:09.150 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69965) - No such process 00:13:09.150 03:56:44 -- target/connect_stress.sh@38 -- # wait 69965 00:13:09.150 03:56:44 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:09.150 03:56:44 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:09.150 03:56:44 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:09.150 03:56:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:09.150 03:56:44 -- nvmf/common.sh@116 -- # sync 00:13:09.409 03:56:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:09.409 03:56:44 -- nvmf/common.sh@119 -- # set +e 00:13:09.409 03:56:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:09.409 03:56:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:09.409 rmmod nvme_tcp 00:13:09.409 rmmod nvme_fabrics 00:13:09.409 rmmod nvme_keyring 00:13:09.409 03:56:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:09.409 03:56:44 -- nvmf/common.sh@123 -- # set -e 00:13:09.409 03:56:44 -- nvmf/common.sh@124 -- # return 0 00:13:09.409 03:56:44 -- nvmf/common.sh@477 -- # '[' -n 69913 ']' 00:13:09.409 03:56:44 -- nvmf/common.sh@478 -- # killprocess 69913 00:13:09.409 03:56:44 -- common/autotest_common.sh@936 -- # '[' -z 69913 ']' 00:13:09.409 03:56:44 -- common/autotest_common.sh@940 -- # kill -0 69913 00:13:09.409 03:56:44 -- common/autotest_common.sh@941 -- # uname 00:13:09.409 03:56:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.409 03:56:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69913 00:13:09.409 killing process with pid 69913 00:13:09.409 03:56:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:09.409 03:56:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:09.409 03:56:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69913' 00:13:09.409 03:56:44 -- common/autotest_common.sh@955 -- # kill 69913 00:13:09.409 03:56:44 -- common/autotest_common.sh@960 -- # wait 69913 00:13:09.668 03:56:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:09.668 03:56:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:09.668 03:56:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:09.668 03:56:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.668 03:56:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:09.668 03:56:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.668 03:56:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.668 03:56:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.668 03:56:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:09.927 00:13:09.927 real 0m12.529s 
00:13:09.927 user 0m41.404s 00:13:09.927 sys 0m3.182s 00:13:09.927 03:56:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:09.927 03:56:44 -- common/autotest_common.sh@10 -- # set +x 00:13:09.927 ************************************ 00:13:09.927 END TEST nvmf_connect_stress 00:13:09.927 ************************************ 00:13:09.927 03:56:44 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:09.927 03:56:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:09.927 03:56:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.927 03:56:44 -- common/autotest_common.sh@10 -- # set +x 00:13:09.927 ************************************ 00:13:09.927 START TEST nvmf_fused_ordering 00:13:09.927 ************************************ 00:13:09.927 03:56:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:09.927 * Looking for test storage... 00:13:09.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.927 03:56:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:09.927 03:56:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:09.927 03:56:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:09.927 03:56:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:09.927 03:56:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:09.927 03:56:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:09.927 03:56:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:09.927 03:56:44 -- scripts/common.sh@335 -- # IFS=.-: 00:13:09.927 03:56:44 -- scripts/common.sh@335 -- # read -ra ver1 00:13:09.927 03:56:44 -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.927 03:56:44 -- scripts/common.sh@336 -- # read -ra ver2 00:13:09.927 03:56:44 -- scripts/common.sh@337 -- # local 'op=<' 00:13:09.927 03:56:44 -- scripts/common.sh@339 -- # ver1_l=2 00:13:09.927 03:56:44 -- scripts/common.sh@340 -- # ver2_l=1 00:13:09.927 03:56:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:09.927 03:56:44 -- scripts/common.sh@343 -- # case "$op" in 00:13:09.927 03:56:44 -- scripts/common.sh@344 -- # : 1 00:13:09.927 03:56:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:09.927 03:56:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.927 03:56:44 -- scripts/common.sh@364 -- # decimal 1 00:13:09.927 03:56:44 -- scripts/common.sh@352 -- # local d=1 00:13:09.927 03:56:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.927 03:56:44 -- scripts/common.sh@354 -- # echo 1 00:13:09.927 03:56:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:09.927 03:56:44 -- scripts/common.sh@365 -- # decimal 2 00:13:09.927 03:56:44 -- scripts/common.sh@352 -- # local d=2 00:13:09.927 03:56:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.927 03:56:44 -- scripts/common.sh@354 -- # echo 2 00:13:09.927 03:56:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:09.927 03:56:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:09.927 03:56:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:09.927 03:56:45 -- scripts/common.sh@367 -- # return 0 00:13:09.927 03:56:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.927 03:56:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.927 --rc genhtml_branch_coverage=1 00:13:09.927 --rc genhtml_function_coverage=1 00:13:09.927 --rc genhtml_legend=1 00:13:09.927 --rc geninfo_all_blocks=1 00:13:09.927 --rc geninfo_unexecuted_blocks=1 00:13:09.927 00:13:09.927 ' 00:13:09.927 03:56:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.927 --rc genhtml_branch_coverage=1 00:13:09.927 --rc genhtml_function_coverage=1 00:13:09.927 --rc genhtml_legend=1 00:13:09.927 --rc geninfo_all_blocks=1 00:13:09.927 --rc geninfo_unexecuted_blocks=1 00:13:09.927 00:13:09.927 ' 00:13:09.927 03:56:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.927 --rc genhtml_branch_coverage=1 00:13:09.927 --rc genhtml_function_coverage=1 00:13:09.927 --rc genhtml_legend=1 00:13:09.927 --rc geninfo_all_blocks=1 00:13:09.927 --rc geninfo_unexecuted_blocks=1 00:13:09.927 00:13:09.927 ' 00:13:09.927 03:56:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:09.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.928 --rc genhtml_branch_coverage=1 00:13:09.928 --rc genhtml_function_coverage=1 00:13:09.928 --rc genhtml_legend=1 00:13:09.928 --rc geninfo_all_blocks=1 00:13:09.928 --rc geninfo_unexecuted_blocks=1 00:13:09.928 00:13:09.928 ' 00:13:09.928 03:56:45 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.928 03:56:45 -- nvmf/common.sh@7 -- # uname -s 00:13:09.928 03:56:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.928 03:56:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.928 03:56:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.928 03:56:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.928 03:56:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.928 03:56:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.928 03:56:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.928 03:56:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.928 03:56:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.928 03:56:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.928 03:56:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
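
The connect_stress run above passes: the kill -0 69965 polls keep succeeding until the 10-second stressor exits on its own, after which the harness removes the rpc.txt scratch file, unloads the nvme modules, and prints the timing block. The suite then moves on to nvmf_fused_ordering, and the scripts/common.sh trace just shown (lt 1.15 2 via cmp_versions) is how it picks lcov flags: both version strings are split on '.', '-', and ':' and compared numerically, field by field. Because 1.15 sorts before 2, the 1.x-style --rc lcov_branch_coverage/--rc lcov_function_coverage options get exported. A sketch of that comparison, assuming the simplified form below (the real helper also validates each field through a decimal() check and supports more operators than shown):

    # Field-wise version comparison as traced above, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local op=$2 ver1 ver2 ver1_l ver2_l lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { gt=1; break; }  # missing fields count as 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
        esac
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.x detected"  # succeeds in this log

The nvme gen-hostnqn call that follows gives the initiator a unique host NQN, and common.sh reuses the UUID embedded in it as NVME_HOSTID, which is why both values above carry bcb05152-0cc3-4ff8-8903-5bb8968d2c01.
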
00:13:09.928 03:56:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:13:09.928 03:56:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.928 03:56:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.928 03:56:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.928 03:56:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.928 03:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.928 03:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.928 03:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.928 03:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.928 03:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.928 03:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.928 03:56:45 -- paths/export.sh@5 -- # export PATH 00:13:09.928 03:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.928 03:56:45 -- nvmf/common.sh@46 -- # : 0 00:13:09.928 03:56:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.928 03:56:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.928 03:56:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.928 03:56:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.928 03:56:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.928 03:56:45 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:09.928 03:56:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.928 03:56:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.928 03:56:45 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:09.928 03:56:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.928 03:56:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.928 03:56:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.928 03:56:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.928 03:56:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.928 03:56:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.928 03:56:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.928 03:56:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.186 03:56:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:10.186 03:56:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:10.186 03:56:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:10.186 03:56:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:10.186 03:56:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:10.186 03:56:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:10.186 03:56:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.186 03:56:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.187 03:56:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:10.187 03:56:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:10.187 03:56:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.187 03:56:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.187 03:56:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.187 03:56:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.187 03:56:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.187 03:56:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:10.187 03:56:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.187 03:56:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.187 03:56:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:10.187 03:56:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:10.187 Cannot find device "nvmf_tgt_br" 00:13:10.187 03:56:45 -- nvmf/common.sh@154 -- # true 00:13:10.187 03:56:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.187 Cannot find device "nvmf_tgt_br2" 00:13:10.187 03:56:45 -- nvmf/common.sh@155 -- # true 00:13:10.187 03:56:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:10.187 03:56:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:10.187 Cannot find device "nvmf_tgt_br" 00:13:10.187 03:56:45 -- nvmf/common.sh@157 -- # true 00:13:10.187 03:56:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:10.187 Cannot find device "nvmf_tgt_br2" 00:13:10.187 03:56:45 -- nvmf/common.sh@158 -- # true 00:13:10.187 03:56:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:10.187 03:56:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:10.187 03:56:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.187 03:56:45 -- nvmf/common.sh@161 -- # true 00:13:10.187 03:56:45 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.187 03:56:45 -- nvmf/common.sh@162 -- # true 00:13:10.187 03:56:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.187 03:56:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.187 03:56:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.187 03:56:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.187 03:56:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.187 03:56:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.187 03:56:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.187 03:56:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:10.187 03:56:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:10.187 03:56:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:10.187 03:56:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:10.187 03:56:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:10.187 03:56:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:10.187 03:56:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.187 03:56:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.187 03:56:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.187 03:56:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:10.187 03:56:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:10.187 03:56:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.187 03:56:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.187 03:56:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.444 03:56:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.444 03:56:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.444 03:56:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:10.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:10.444 00:13:10.444 --- 10.0.0.2 ping statistics --- 00:13:10.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.444 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:10.445 03:56:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:10.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:10.445 00:13:10.445 --- 10.0.0.3 ping statistics --- 00:13:10.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.445 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:10.445 03:56:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:10.445 00:13:10.445 --- 10.0.0.1 ping statistics --- 00:13:10.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.445 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:10.445 03:56:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.445 03:56:45 -- nvmf/common.sh@421 -- # return 0 00:13:10.445 03:56:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:10.445 03:56:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.445 03:56:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:10.445 03:56:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:10.445 03:56:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.445 03:56:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:10.445 03:56:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:10.445 03:56:45 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:10.445 03:56:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:10.445 03:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.445 03:56:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.445 03:56:45 -- nvmf/common.sh@469 -- # nvmfpid=70307 00:13:10.445 03:56:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:10.445 03:56:45 -- nvmf/common.sh@470 -- # waitforlisten 70307 00:13:10.445 03:56:45 -- common/autotest_common.sh@829 -- # '[' -z 70307 ']' 00:13:10.445 03:56:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.445 03:56:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.445 03:56:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.445 03:56:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.445 03:56:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.445 [2024-11-08 03:56:45.400363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:10.445 [2024-11-08 03:56:45.400487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.445 [2024-11-08 03:56:45.537671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.703 [2024-11-08 03:56:45.641955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.703 [2024-11-08 03:56:45.642130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.703 [2024-11-08 03:56:45.642147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.703 [2024-11-08 03:56:45.642159] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
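
nvmfappstart above launches nvmf_tgt inside the target namespace (pid 70307 this time, core mask 0x2) and then blocks in waitforlisten until the application is actually usable; the EAL, trace, and reactor notices interleaved here are the target's own startup output. The essential shape of that wait, sketched from the trace (the 'Waiting for process...' message and max_retries=100 are visible above; probing readiness with rpc.py rpc_get_methods matches current autotest_common.sh but is an assumption here, not something this log shows):

    # Poll until the target process is alive and answering JSON-RPC.
    waitforlisten() {  # usage: waitforlisten <pid> [rpc_addr]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
            # The RPC probe fails until the app has bound its UNIX-domain socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1  # never came up within the retry budget
    }

Only after this returns 0 does the script begin the rpc_cmd configuration calls that follow.
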
00:13:10.703 [2024-11-08 03:56:45.642202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.271 03:56:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.271 03:56:46 -- common/autotest_common.sh@862 -- # return 0 00:13:11.271 03:56:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:11.271 03:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.271 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.530 03:56:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.531 03:56:46 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 [2024-11-08 03:56:46.419502] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 [2024-11-08 03:56:46.435616] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 NULL1 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:11.531 03:56:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.531 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 03:56:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.531 03:56:46 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:11.531 [2024-11-08 03:56:46.488751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
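
The rpc_cmd sequence just traced configures the freshly started target end to end: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener at 10.0.0.2:4420 inside the namespace, and a 1000 MiB null bdev attached as namespace 1. rpc_cmd forwards its arguments to scripts/rpc.py unchanged, so the same bring-up can be reproduced by hand as sketched here (per rpc.py: -a allows any host, -s sets the serial number, -m caps the namespace count, -u the IO unit size):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO units
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                        # listener inside the netns
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB, 512 B blocks, no backing store
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The host-side test binary then connects over TCP and drives the target:
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

NVMe fused operations are a pair of commands (the spec's example is compare-and-write) that the controller must execute as one atomic unit. Each fused_ordering(N) line below appears to mark one iteration of the binary's submission loop completing in order, so an unbroken counter is what a passing run looks like.
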
00:13:11.531 [2024-11-08 03:56:46.488798] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70357 ] 00:13:11.790 Attached to nqn.2016-06.io.spdk:cnode1 00:13:11.790 Namespace ID: 1 size: 1GB 00:13:11.790 fused_ordering(0) 00:13:11.790 fused_ordering(1) 00:13:11.790 fused_ordering(2) 00:13:11.790 fused_ordering(3) 00:13:11.790 fused_ordering(4) 00:13:11.790 fused_ordering(5) 00:13:11.790 fused_ordering(6) 00:13:11.790 fused_ordering(7) 00:13:11.790 fused_ordering(8) 00:13:11.790 fused_ordering(9) 00:13:11.790 fused_ordering(10) 00:13:11.790 fused_ordering(11) 00:13:11.790 fused_ordering(12) 00:13:11.790 fused_ordering(13) 00:13:11.790 fused_ordering(14) 00:13:11.790 fused_ordering(15) 00:13:11.790 fused_ordering(16) 00:13:11.790 fused_ordering(17) 00:13:11.790 fused_ordering(18) 00:13:11.790 fused_ordering(19) 00:13:11.790 fused_ordering(20) 00:13:11.790 fused_ordering(21) 00:13:11.790 fused_ordering(22) 00:13:11.790 fused_ordering(23) 00:13:11.790 fused_ordering(24) 00:13:11.790 fused_ordering(25) 00:13:11.790 fused_ordering(26) 00:13:11.790 fused_ordering(27) 00:13:11.790 fused_ordering(28) 00:13:11.790 fused_ordering(29) 00:13:11.790 fused_ordering(30) 00:13:11.790 fused_ordering(31) 00:13:11.790 fused_ordering(32) 00:13:11.790 fused_ordering(33) 00:13:11.790 fused_ordering(34) 00:13:11.790 fused_ordering(35) 00:13:11.790 fused_ordering(36) 00:13:11.790 fused_ordering(37) 00:13:11.790 fused_ordering(38) 00:13:11.790 fused_ordering(39) 00:13:11.790 fused_ordering(40) 00:13:11.790 fused_ordering(41) 00:13:11.790 fused_ordering(42) 00:13:11.790 fused_ordering(43) 00:13:11.790 fused_ordering(44) 00:13:11.790 fused_ordering(45) 00:13:11.790 fused_ordering(46) 00:13:11.790 fused_ordering(47) 00:13:11.790 fused_ordering(48) 00:13:11.790 fused_ordering(49) 00:13:11.790 fused_ordering(50) 00:13:11.790 fused_ordering(51) 00:13:11.790 fused_ordering(52) 00:13:11.790 fused_ordering(53) 00:13:11.790 fused_ordering(54) 00:13:11.790 fused_ordering(55) 00:13:11.790 fused_ordering(56) 00:13:11.790 fused_ordering(57) 00:13:11.790 fused_ordering(58) 00:13:11.790 fused_ordering(59) 00:13:11.790 fused_ordering(60) 00:13:11.790 fused_ordering(61) 00:13:11.790 fused_ordering(62) 00:13:11.790 fused_ordering(63) 00:13:11.790 fused_ordering(64) 00:13:11.790 fused_ordering(65) 00:13:11.790 fused_ordering(66) 00:13:11.790 fused_ordering(67) 00:13:11.790 fused_ordering(68) 00:13:11.790 fused_ordering(69) 00:13:11.790 fused_ordering(70) 00:13:11.790 fused_ordering(71) 00:13:11.790 fused_ordering(72) 00:13:11.790 fused_ordering(73) 00:13:11.790 fused_ordering(74) 00:13:11.790 fused_ordering(75) 00:13:11.790 fused_ordering(76) 00:13:11.790 fused_ordering(77) 00:13:11.790 fused_ordering(78) 00:13:11.790 fused_ordering(79) 00:13:11.790 fused_ordering(80) 00:13:11.790 fused_ordering(81) 00:13:11.790 fused_ordering(82) 00:13:11.790 fused_ordering(83) 00:13:11.790 fused_ordering(84) 00:13:11.790 fused_ordering(85) 00:13:11.790 fused_ordering(86) 00:13:11.790 fused_ordering(87) 00:13:11.790 fused_ordering(88) 00:13:11.790 fused_ordering(89) 00:13:11.790 fused_ordering(90) 00:13:11.790 fused_ordering(91) 00:13:11.790 fused_ordering(92) 00:13:11.790 fused_ordering(93) 00:13:11.790 fused_ordering(94) 00:13:11.790 fused_ordering(95) 00:13:11.790 fused_ordering(96) 00:13:11.790 fused_ordering(97) 00:13:11.790 fused_ordering(98) 
00:13:11.790 fused_ordering(99) ... 00:13:12.617 fused_ordering(528) [fused_ordering(99) through fused_ordering(528): 430 consecutive per-iteration markers condensed; the counter advances without a gap while the timestamps move from 00:13:11.790 to 00:13:12.617]
00:13:12.617 fused_ordering(529) 00:13:12.617 fused_ordering(530) 00:13:12.617 fused_ordering(531) 00:13:12.617 fused_ordering(532) 00:13:12.617 fused_ordering(533) 00:13:12.617 fused_ordering(534) 00:13:12.617 fused_ordering(535) 00:13:12.617 fused_ordering(536) 00:13:12.617 fused_ordering(537) 00:13:12.617 fused_ordering(538) 00:13:12.617 fused_ordering(539) 00:13:12.617 fused_ordering(540) 00:13:12.617 fused_ordering(541) 00:13:12.617 fused_ordering(542) 00:13:12.617 fused_ordering(543) 00:13:12.617 fused_ordering(544) 00:13:12.617 fused_ordering(545) 00:13:12.617 fused_ordering(546) 00:13:12.617 fused_ordering(547) 00:13:12.617 fused_ordering(548) 00:13:12.617 fused_ordering(549) 00:13:12.617 fused_ordering(550) 00:13:12.617 fused_ordering(551) 00:13:12.617 fused_ordering(552) 00:13:12.617 fused_ordering(553) 00:13:12.617 fused_ordering(554) 00:13:12.617 fused_ordering(555) 00:13:12.617 fused_ordering(556) 00:13:12.617 fused_ordering(557) 00:13:12.617 fused_ordering(558) 00:13:12.617 fused_ordering(559) 00:13:12.617 fused_ordering(560) 00:13:12.617 fused_ordering(561) 00:13:12.617 fused_ordering(562) 00:13:12.617 fused_ordering(563) 00:13:12.617 fused_ordering(564) 00:13:12.617 fused_ordering(565) 00:13:12.617 fused_ordering(566) 00:13:12.617 fused_ordering(567) 00:13:12.617 fused_ordering(568) 00:13:12.617 fused_ordering(569) 00:13:12.617 fused_ordering(570) 00:13:12.617 fused_ordering(571) 00:13:12.617 fused_ordering(572) 00:13:12.617 fused_ordering(573) 00:13:12.617 fused_ordering(574) 00:13:12.617 fused_ordering(575) 00:13:12.617 fused_ordering(576) 00:13:12.617 fused_ordering(577) 00:13:12.617 fused_ordering(578) 00:13:12.617 fused_ordering(579) 00:13:12.617 fused_ordering(580) 00:13:12.617 fused_ordering(581) 00:13:12.617 fused_ordering(582) 00:13:12.617 fused_ordering(583) 00:13:12.617 fused_ordering(584) 00:13:12.617 fused_ordering(585) 00:13:12.617 fused_ordering(586) 00:13:12.617 fused_ordering(587) 00:13:12.617 fused_ordering(588) 00:13:12.617 fused_ordering(589) 00:13:12.617 fused_ordering(590) 00:13:12.617 fused_ordering(591) 00:13:12.617 fused_ordering(592) 00:13:12.617 fused_ordering(593) 00:13:12.617 fused_ordering(594) 00:13:12.617 fused_ordering(595) 00:13:12.617 fused_ordering(596) 00:13:12.617 fused_ordering(597) 00:13:12.617 fused_ordering(598) 00:13:12.617 fused_ordering(599) 00:13:12.617 fused_ordering(600) 00:13:12.617 fused_ordering(601) 00:13:12.617 fused_ordering(602) 00:13:12.617 fused_ordering(603) 00:13:12.617 fused_ordering(604) 00:13:12.617 fused_ordering(605) 00:13:12.617 fused_ordering(606) 00:13:12.617 fused_ordering(607) 00:13:12.617 fused_ordering(608) 00:13:12.617 fused_ordering(609) 00:13:12.617 fused_ordering(610) 00:13:12.617 fused_ordering(611) 00:13:12.617 fused_ordering(612) 00:13:12.617 fused_ordering(613) 00:13:12.617 fused_ordering(614) 00:13:12.617 fused_ordering(615) 00:13:12.876 fused_ordering(616) 00:13:12.876 fused_ordering(617) 00:13:12.876 fused_ordering(618) 00:13:12.876 fused_ordering(619) 00:13:12.876 fused_ordering(620) 00:13:12.876 fused_ordering(621) 00:13:12.876 fused_ordering(622) 00:13:12.876 fused_ordering(623) 00:13:12.876 fused_ordering(624) 00:13:12.876 fused_ordering(625) 00:13:12.876 fused_ordering(626) 00:13:12.876 fused_ordering(627) 00:13:12.876 fused_ordering(628) 00:13:12.876 fused_ordering(629) 00:13:12.876 fused_ordering(630) 00:13:12.876 fused_ordering(631) 00:13:12.876 fused_ordering(632) 00:13:12.876 fused_ordering(633) 00:13:12.876 fused_ordering(634) 00:13:12.876 fused_ordering(635) 00:13:12.876 
fused_ordering(636) 00:13:12.876 fused_ordering(637) 00:13:12.876 fused_ordering(638) 00:13:12.876 fused_ordering(639) 00:13:12.876 fused_ordering(640) 00:13:12.876 fused_ordering(641) 00:13:12.876 fused_ordering(642) 00:13:12.876 fused_ordering(643) 00:13:12.876 fused_ordering(644) 00:13:12.876 fused_ordering(645) 00:13:12.876 fused_ordering(646) 00:13:12.876 fused_ordering(647) 00:13:12.876 fused_ordering(648) 00:13:12.876 fused_ordering(649) 00:13:12.876 fused_ordering(650) 00:13:12.876 fused_ordering(651) 00:13:12.876 fused_ordering(652) 00:13:12.876 fused_ordering(653) 00:13:12.876 fused_ordering(654) 00:13:12.876 fused_ordering(655) 00:13:12.876 fused_ordering(656) 00:13:12.876 fused_ordering(657) 00:13:12.876 fused_ordering(658) 00:13:12.876 fused_ordering(659) 00:13:12.876 fused_ordering(660) 00:13:12.876 fused_ordering(661) 00:13:12.876 fused_ordering(662) 00:13:12.876 fused_ordering(663) 00:13:12.876 fused_ordering(664) 00:13:12.876 fused_ordering(665) 00:13:12.876 fused_ordering(666) 00:13:12.876 fused_ordering(667) 00:13:12.876 fused_ordering(668) 00:13:12.876 fused_ordering(669) 00:13:12.876 fused_ordering(670) 00:13:12.876 fused_ordering(671) 00:13:12.876 fused_ordering(672) 00:13:12.876 fused_ordering(673) 00:13:12.876 fused_ordering(674) 00:13:12.876 fused_ordering(675) 00:13:12.876 fused_ordering(676) 00:13:12.876 fused_ordering(677) 00:13:12.876 fused_ordering(678) 00:13:12.876 fused_ordering(679) 00:13:12.876 fused_ordering(680) 00:13:12.876 fused_ordering(681) 00:13:12.876 fused_ordering(682) 00:13:12.876 fused_ordering(683) 00:13:12.876 fused_ordering(684) 00:13:12.876 fused_ordering(685) 00:13:12.876 fused_ordering(686) 00:13:12.876 fused_ordering(687) 00:13:12.876 fused_ordering(688) 00:13:12.876 fused_ordering(689) 00:13:12.876 fused_ordering(690) 00:13:12.876 fused_ordering(691) 00:13:12.876 fused_ordering(692) 00:13:12.876 fused_ordering(693) 00:13:12.876 fused_ordering(694) 00:13:12.876 fused_ordering(695) 00:13:12.876 fused_ordering(696) 00:13:12.876 fused_ordering(697) 00:13:12.876 fused_ordering(698) 00:13:12.876 fused_ordering(699) 00:13:12.876 fused_ordering(700) 00:13:12.876 fused_ordering(701) 00:13:12.876 fused_ordering(702) 00:13:12.876 fused_ordering(703) 00:13:12.876 fused_ordering(704) 00:13:12.876 fused_ordering(705) 00:13:12.876 fused_ordering(706) 00:13:12.876 fused_ordering(707) 00:13:12.876 fused_ordering(708) 00:13:12.876 fused_ordering(709) 00:13:12.876 fused_ordering(710) 00:13:12.876 fused_ordering(711) 00:13:12.876 fused_ordering(712) 00:13:12.876 fused_ordering(713) 00:13:12.876 fused_ordering(714) 00:13:12.876 fused_ordering(715) 00:13:12.876 fused_ordering(716) 00:13:12.876 fused_ordering(717) 00:13:12.876 fused_ordering(718) 00:13:12.876 fused_ordering(719) 00:13:12.876 fused_ordering(720) 00:13:12.876 fused_ordering(721) 00:13:12.876 fused_ordering(722) 00:13:12.876 fused_ordering(723) 00:13:12.876 fused_ordering(724) 00:13:12.876 fused_ordering(725) 00:13:12.876 fused_ordering(726) 00:13:12.876 fused_ordering(727) 00:13:12.876 fused_ordering(728) 00:13:12.876 fused_ordering(729) 00:13:12.876 fused_ordering(730) 00:13:12.876 fused_ordering(731) 00:13:12.876 fused_ordering(732) 00:13:12.876 fused_ordering(733) 00:13:12.876 fused_ordering(734) 00:13:12.876 fused_ordering(735) 00:13:12.876 fused_ordering(736) 00:13:12.876 fused_ordering(737) 00:13:12.876 fused_ordering(738) 00:13:12.876 fused_ordering(739) 00:13:12.876 fused_ordering(740) 00:13:12.876 fused_ordering(741) 00:13:12.876 fused_ordering(742) 00:13:12.876 fused_ordering(743) 
00:13:12.876 fused_ordering(744) 00:13:12.876 fused_ordering(745) 00:13:12.876 fused_ordering(746) 00:13:12.876 fused_ordering(747) 00:13:12.876 fused_ordering(748) 00:13:12.876 fused_ordering(749) 00:13:12.876 fused_ordering(750) 00:13:12.876 fused_ordering(751) 00:13:12.876 fused_ordering(752) 00:13:12.876 fused_ordering(753) 00:13:12.876 fused_ordering(754) 00:13:12.876 fused_ordering(755) 00:13:12.876 fused_ordering(756) 00:13:12.876 fused_ordering(757) 00:13:12.876 fused_ordering(758) 00:13:12.876 fused_ordering(759) 00:13:12.876 fused_ordering(760) 00:13:12.876 fused_ordering(761) 00:13:12.876 fused_ordering(762) 00:13:12.876 fused_ordering(763) 00:13:12.876 fused_ordering(764) 00:13:12.876 fused_ordering(765) 00:13:12.876 fused_ordering(766) 00:13:12.876 fused_ordering(767) 00:13:12.876 fused_ordering(768) 00:13:12.876 fused_ordering(769) 00:13:12.876 fused_ordering(770) 00:13:12.876 fused_ordering(771) 00:13:12.876 fused_ordering(772) 00:13:12.876 fused_ordering(773) 00:13:12.876 fused_ordering(774) 00:13:12.876 fused_ordering(775) 00:13:12.876 fused_ordering(776) 00:13:12.876 fused_ordering(777) 00:13:12.876 fused_ordering(778) 00:13:12.876 fused_ordering(779) 00:13:12.876 fused_ordering(780) 00:13:12.876 fused_ordering(781) 00:13:12.876 fused_ordering(782) 00:13:12.876 fused_ordering(783) 00:13:12.876 fused_ordering(784) 00:13:12.876 fused_ordering(785) 00:13:12.876 fused_ordering(786) 00:13:12.876 fused_ordering(787) 00:13:12.876 fused_ordering(788) 00:13:12.876 fused_ordering(789) 00:13:12.876 fused_ordering(790) 00:13:12.876 fused_ordering(791) 00:13:12.876 fused_ordering(792) 00:13:12.876 fused_ordering(793) 00:13:12.876 fused_ordering(794) 00:13:12.876 fused_ordering(795) 00:13:12.876 fused_ordering(796) 00:13:12.876 fused_ordering(797) 00:13:12.876 fused_ordering(798) 00:13:12.876 fused_ordering(799) 00:13:12.876 fused_ordering(800) 00:13:12.876 fused_ordering(801) 00:13:12.876 fused_ordering(802) 00:13:12.876 fused_ordering(803) 00:13:12.876 fused_ordering(804) 00:13:12.876 fused_ordering(805) 00:13:12.876 fused_ordering(806) 00:13:12.876 fused_ordering(807) 00:13:12.876 fused_ordering(808) 00:13:12.876 fused_ordering(809) 00:13:12.876 fused_ordering(810) 00:13:12.876 fused_ordering(811) 00:13:12.876 fused_ordering(812) 00:13:12.876 fused_ordering(813) 00:13:12.876 fused_ordering(814) 00:13:12.876 fused_ordering(815) 00:13:12.876 fused_ordering(816) 00:13:12.876 fused_ordering(817) 00:13:12.876 fused_ordering(818) 00:13:12.876 fused_ordering(819) 00:13:12.876 fused_ordering(820) 00:13:13.444 fused_ordering(821) 00:13:13.444 fused_ordering(822) 00:13:13.444 fused_ordering(823) 00:13:13.444 fused_ordering(824) 00:13:13.444 fused_ordering(825) 00:13:13.444 fused_ordering(826) 00:13:13.444 fused_ordering(827) 00:13:13.444 fused_ordering(828) 00:13:13.444 fused_ordering(829) 00:13:13.444 fused_ordering(830) 00:13:13.444 fused_ordering(831) 00:13:13.444 fused_ordering(832) 00:13:13.444 fused_ordering(833) 00:13:13.444 fused_ordering(834) 00:13:13.444 fused_ordering(835) 00:13:13.444 fused_ordering(836) 00:13:13.444 fused_ordering(837) 00:13:13.445 fused_ordering(838) 00:13:13.445 fused_ordering(839) 00:13:13.445 fused_ordering(840) 00:13:13.445 fused_ordering(841) 00:13:13.445 fused_ordering(842) 00:13:13.445 fused_ordering(843) 00:13:13.445 fused_ordering(844) 00:13:13.445 fused_ordering(845) 00:13:13.445 fused_ordering(846) 00:13:13.445 fused_ordering(847) 00:13:13.445 fused_ordering(848) 00:13:13.445 fused_ordering(849) 00:13:13.445 fused_ordering(850) 00:13:13.445 
fused_ordering(851) 00:13:13.445 fused_ordering(852) 00:13:13.445 fused_ordering(853) 00:13:13.445 fused_ordering(854) 00:13:13.445 fused_ordering(855) 00:13:13.445 fused_ordering(856) 00:13:13.445 fused_ordering(857) 00:13:13.445 fused_ordering(858) 00:13:13.445 fused_ordering(859) 00:13:13.445 fused_ordering(860) 00:13:13.445 fused_ordering(861) 00:13:13.445 fused_ordering(862) 00:13:13.445 fused_ordering(863) 00:13:13.445 fused_ordering(864) 00:13:13.445 fused_ordering(865) 00:13:13.445 fused_ordering(866) 00:13:13.445 fused_ordering(867) 00:13:13.445 fused_ordering(868) 00:13:13.445 fused_ordering(869) 00:13:13.445 fused_ordering(870) 00:13:13.445 fused_ordering(871) 00:13:13.445 fused_ordering(872) 00:13:13.445 fused_ordering(873) 00:13:13.445 fused_ordering(874) 00:13:13.445 fused_ordering(875) 00:13:13.445 fused_ordering(876) 00:13:13.445 fused_ordering(877) 00:13:13.445 fused_ordering(878) 00:13:13.445 fused_ordering(879) 00:13:13.445 fused_ordering(880) 00:13:13.445 fused_ordering(881) 00:13:13.445 fused_ordering(882) 00:13:13.445 fused_ordering(883) 00:13:13.445 fused_ordering(884) 00:13:13.445 fused_ordering(885) 00:13:13.445 fused_ordering(886) 00:13:13.445 fused_ordering(887) 00:13:13.445 fused_ordering(888) 00:13:13.445 fused_ordering(889) 00:13:13.445 fused_ordering(890) 00:13:13.445 fused_ordering(891) 00:13:13.445 fused_ordering(892) 00:13:13.445 fused_ordering(893) 00:13:13.445 fused_ordering(894) 00:13:13.445 fused_ordering(895) 00:13:13.445 fused_ordering(896) 00:13:13.445 fused_ordering(897) 00:13:13.445 fused_ordering(898) 00:13:13.445 fused_ordering(899) 00:13:13.445 fused_ordering(900) 00:13:13.445 fused_ordering(901) 00:13:13.445 fused_ordering(902) 00:13:13.445 fused_ordering(903) 00:13:13.445 fused_ordering(904) 00:13:13.445 fused_ordering(905) 00:13:13.445 fused_ordering(906) 00:13:13.445 fused_ordering(907) 00:13:13.445 fused_ordering(908) 00:13:13.445 fused_ordering(909) 00:13:13.445 fused_ordering(910) 00:13:13.445 fused_ordering(911) 00:13:13.445 fused_ordering(912) 00:13:13.445 fused_ordering(913) 00:13:13.445 fused_ordering(914) 00:13:13.445 fused_ordering(915) 00:13:13.445 fused_ordering(916) 00:13:13.445 fused_ordering(917) 00:13:13.445 fused_ordering(918) 00:13:13.445 fused_ordering(919) 00:13:13.445 fused_ordering(920) 00:13:13.445 fused_ordering(921) 00:13:13.445 fused_ordering(922) 00:13:13.445 fused_ordering(923) 00:13:13.445 fused_ordering(924) 00:13:13.445 fused_ordering(925) 00:13:13.445 fused_ordering(926) 00:13:13.445 fused_ordering(927) 00:13:13.445 fused_ordering(928) 00:13:13.445 fused_ordering(929) 00:13:13.445 fused_ordering(930) 00:13:13.445 fused_ordering(931) 00:13:13.445 fused_ordering(932) 00:13:13.445 fused_ordering(933) 00:13:13.445 fused_ordering(934) 00:13:13.445 fused_ordering(935) 00:13:13.445 fused_ordering(936) 00:13:13.445 fused_ordering(937) 00:13:13.445 fused_ordering(938) 00:13:13.445 fused_ordering(939) 00:13:13.445 fused_ordering(940) 00:13:13.445 fused_ordering(941) 00:13:13.445 fused_ordering(942) 00:13:13.445 fused_ordering(943) 00:13:13.445 fused_ordering(944) 00:13:13.445 fused_ordering(945) 00:13:13.445 fused_ordering(946) 00:13:13.445 fused_ordering(947) 00:13:13.445 fused_ordering(948) 00:13:13.445 fused_ordering(949) 00:13:13.445 fused_ordering(950) 00:13:13.445 fused_ordering(951) 00:13:13.445 fused_ordering(952) 00:13:13.445 fused_ordering(953) 00:13:13.445 fused_ordering(954) 00:13:13.445 fused_ordering(955) 00:13:13.445 fused_ordering(956) 00:13:13.445 fused_ordering(957) 00:13:13.445 fused_ordering(958) 
00:13:13.445 fused_ordering(959) 00:13:13.445 fused_ordering(960) 00:13:13.445 fused_ordering(961) 00:13:13.445 fused_ordering(962) 00:13:13.445 fused_ordering(963) 00:13:13.445 fused_ordering(964) 00:13:13.445 fused_ordering(965) 00:13:13.445 fused_ordering(966) 00:13:13.445 fused_ordering(967) 00:13:13.445 fused_ordering(968) 00:13:13.445 fused_ordering(969) 00:13:13.445 fused_ordering(970) 00:13:13.445 fused_ordering(971) 00:13:13.445 fused_ordering(972) 00:13:13.445 fused_ordering(973) 00:13:13.445 fused_ordering(974) 00:13:13.445 fused_ordering(975) 00:13:13.445 fused_ordering(976) 00:13:13.445 fused_ordering(977) 00:13:13.445 fused_ordering(978) 00:13:13.445 fused_ordering(979) 00:13:13.445 fused_ordering(980) 00:13:13.445 fused_ordering(981) 00:13:13.445 fused_ordering(982) 00:13:13.445 fused_ordering(983) 00:13:13.445 fused_ordering(984) 00:13:13.445 fused_ordering(985) 00:13:13.445 fused_ordering(986) 00:13:13.445 fused_ordering(987) 00:13:13.445 fused_ordering(988) 00:13:13.445 fused_ordering(989) 00:13:13.445 fused_ordering(990) 00:13:13.445 fused_ordering(991) 00:13:13.445 fused_ordering(992) 00:13:13.445 fused_ordering(993) 00:13:13.445 fused_ordering(994) 00:13:13.445 fused_ordering(995) 00:13:13.445 fused_ordering(996) 00:13:13.445 fused_ordering(997) 00:13:13.445 fused_ordering(998) 00:13:13.445 fused_ordering(999) 00:13:13.445 fused_ordering(1000) 00:13:13.445 fused_ordering(1001) 00:13:13.445 fused_ordering(1002) 00:13:13.445 fused_ordering(1003) 00:13:13.445 fused_ordering(1004) 00:13:13.445 fused_ordering(1005) 00:13:13.445 fused_ordering(1006) 00:13:13.445 fused_ordering(1007) 00:13:13.445 fused_ordering(1008) 00:13:13.445 fused_ordering(1009) 00:13:13.445 fused_ordering(1010) 00:13:13.445 fused_ordering(1011) 00:13:13.445 fused_ordering(1012) 00:13:13.445 fused_ordering(1013) 00:13:13.445 fused_ordering(1014) 00:13:13.445 fused_ordering(1015) 00:13:13.445 fused_ordering(1016) 00:13:13.445 fused_ordering(1017) 00:13:13.445 fused_ordering(1018) 00:13:13.445 fused_ordering(1019) 00:13:13.445 fused_ordering(1020) 00:13:13.445 fused_ordering(1021) 00:13:13.445 fused_ordering(1022) 00:13:13.445 fused_ordering(1023) 00:13:13.445 03:56:48 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:13.445 03:56:48 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:13.445 03:56:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:13.445 03:56:48 -- nvmf/common.sh@116 -- # sync 00:13:13.445 03:56:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:13.445 03:56:48 -- nvmf/common.sh@119 -- # set +e 00:13:13.445 03:56:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:13.445 03:56:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:13.445 rmmod nvme_tcp 00:13:13.445 rmmod nvme_fabrics 00:13:13.445 rmmod nvme_keyring 00:13:13.445 03:56:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:13.445 03:56:48 -- nvmf/common.sh@123 -- # set -e 00:13:13.445 03:56:48 -- nvmf/common.sh@124 -- # return 0 00:13:13.445 03:56:48 -- nvmf/common.sh@477 -- # '[' -n 70307 ']' 00:13:13.445 03:56:48 -- nvmf/common.sh@478 -- # killprocess 70307 00:13:13.445 03:56:48 -- common/autotest_common.sh@936 -- # '[' -z 70307 ']' 00:13:13.445 03:56:48 -- common/autotest_common.sh@940 -- # kill -0 70307 00:13:13.445 03:56:48 -- common/autotest_common.sh@941 -- # uname 00:13:13.445 03:56:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:13.704 03:56:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70307 00:13:13.704 killing process with pid 70307 
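The nvmftestfini trace above is the standard teardown for an nvmf test: unload the NVMe kernel modules (the for-loop exists because nvme-tcp refuses to unload while a queue pair is still draining), then kill the target process. A minimal standalone sketch of the same sequence, assuming $nvmfpid holds the target pid the way nvmf/common.sh tracks it:

# Unload modules with retries; modprobe -v prints the rmmod lines seen above.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e
# Stop the target if it is still running, then reap it.
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"
fi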
00:13:13.704 03:56:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:13.704 03:56:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:13.704 03:56:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70307'
00:13:13.704 03:56:48 -- common/autotest_common.sh@955 -- # kill 70307
00:13:13.704 03:56:48 -- common/autotest_common.sh@960 -- # wait 70307
00:13:13.962 03:56:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:13.962 03:56:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:13.962 03:56:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:13.962 03:56:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:13.962 03:56:48 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:13.962 03:56:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:13.962 03:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:13.962 03:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:13.962 03:56:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:13:13.962
00:13:13.962 real 0m4.031s
00:13:13.962 user 0m4.759s
00:13:13.962 sys 0m1.330s
00:13:13.962 03:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:13.962 03:56:48 -- common/autotest_common.sh@10 -- # set +x
00:13:13.962 ************************************
00:13:13.962 END TEST nvmf_fused_ordering
00:13:13.962 ************************************
00:13:13.962 03:56:48 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:13:13.962 03:56:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:13.962 03:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:13.962 03:56:48 -- common/autotest_common.sh@10 -- # set +x
00:13:13.962 ************************************
00:13:13.962 START TEST nvmf_delete_subsystem
00:13:13.962 ************************************
00:13:13.962 03:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:13:13.962 * Looking for test storage...
00:13:13.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:13:13.962 03:56:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:13.962 03:56:48 -- common/autotest_common.sh@1690 -- # lcov --version
00:13:13.962 03:56:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:14.220 03:56:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:14.220 03:56:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:14.220 03:56:49 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:14.220 03:56:49 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:14.220 03:56:49 -- scripts/common.sh@335 -- # IFS=.-:
00:13:14.220 03:56:49 -- scripts/common.sh@335 -- # read -ra ver1
00:13:14.220 03:56:49 -- scripts/common.sh@336 -- # IFS=.-:
00:13:14.220 03:56:49 -- scripts/common.sh@336 -- # read -ra ver2
00:13:14.220 03:56:49 -- scripts/common.sh@337 -- # local 'op=<'
00:13:14.220 03:56:49 -- scripts/common.sh@339 -- # ver1_l=2
00:13:14.220 03:56:49 -- scripts/common.sh@340 -- # ver2_l=1
00:13:14.220 03:56:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:14.220 03:56:49 -- scripts/common.sh@343 -- # case "$op" in
00:13:14.220 03:56:49 -- scripts/common.sh@344 -- # : 1
00:13:14.220 03:56:49 -- scripts/common.sh@363 -- # (( v = 0 ))
00:13:14.220 03:56:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:14.220 03:56:49 -- scripts/common.sh@364 -- # decimal 1
00:13:14.220 03:56:49 -- scripts/common.sh@352 -- # local d=1
00:13:14.220 03:56:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:14.220 03:56:49 -- scripts/common.sh@354 -- # echo 1
00:13:14.220 03:56:49 -- scripts/common.sh@364 -- # ver1[v]=1
00:13:14.220 03:56:49 -- scripts/common.sh@365 -- # decimal 2
00:13:14.220 03:56:49 -- scripts/common.sh@352 -- # local d=2
00:13:14.220 03:56:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:14.220 03:56:49 -- scripts/common.sh@354 -- # echo 2
00:13:14.220 03:56:49 -- scripts/common.sh@365 -- # ver2[v]=2
00:13:14.220 03:56:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:14.220 03:56:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:14.220 03:56:49 -- scripts/common.sh@367 -- # return 0
00:13:14.220 03:56:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:14.220 03:56:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:13:14.220 03:56:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:13:14.220 03:56:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
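The lt/cmp_versions trace above is how the harness decides that lcov 1.15 predates version 2: both strings are split on '.', '-' and ':' and compared numerically field by field. A simplified sketch of the comparison (the real scripts/common.sh helper also handles '>', '==' and the decimal sanity check traced above):

# Return success when dotted version $1 sorts strictly before $2.
version_lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the --rc options"   # 1 < 2, matching the 'return 0' above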
00:13:14.220 03:56:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:13:14.220 03:56:49 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:13:14.220 03:56:49 -- nvmf/common.sh@7 -- # uname -s
00:13:14.220 03:56:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:14.220 03:56:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:14.220 03:56:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:14.220 03:56:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:14.221 03:56:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:14.221 03:56:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:14.221 03:56:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:14.221 03:56:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:14.221 03:56:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:14.221 03:56:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:14.221 03:56:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01
00:13:14.221 03:56:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01
00:13:14.221 03:56:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:14.221 03:56:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:14.221 03:56:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:13:14.221 03:56:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:14.221 03:56:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:14.221 03:56:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:14.221 03:56:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:14.221 03:56:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:14.221 03:56:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:14.221 03:56:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:14.221 03:56:49 -- paths/export.sh@5 -- # export PATH
00:13:14.221 03:56:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:14.221 03:56:49 -- nvmf/common.sh@46 -- # : 0
00:13:14.221 03:56:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:14.221 03:56:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:14.221 03:56:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:14.221 03:56:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:14.221 03:56:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:14.221 03:56:49 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:14.221 03:56:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:14.221 03:56:49 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:14.221 03:56:49 -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:13:14.221 03:56:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:14.221 03:56:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:14.221 03:56:49 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:14.221 03:56:49 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:14.221 03:56:49 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:14.221 03:56:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:14.221 03:56:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:14.221 03:56:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:14.221 03:56:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:13:14.221 03:56:49 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:13:14.221 03:56:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:13:14.221 03:56:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:13:14.221 03:56:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:13:14.221 03:56:49 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:13:14.221 03:56:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:14.221 03:56:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:14.221 03:56:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:13:14.221 03:56:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:13:14.221 03:56:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:13:14.221 03:56:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:13:14.221 03:56:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
"$NVMF_TARGET_NAMESPACE") 00:13:14.221 03:56:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:14.221 03:56:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:14.221 03:56:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:14.221 03:56:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:14.221 03:56:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:14.221 03:56:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:14.221 Cannot find device "nvmf_tgt_br" 00:13:14.221 03:56:49 -- nvmf/common.sh@154 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:14.221 Cannot find device "nvmf_tgt_br2" 00:13:14.221 03:56:49 -- nvmf/common.sh@155 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:14.221 03:56:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:14.221 Cannot find device "nvmf_tgt_br" 00:13:14.221 03:56:49 -- nvmf/common.sh@157 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:14.221 Cannot find device "nvmf_tgt_br2" 00:13:14.221 03:56:49 -- nvmf/common.sh@158 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:14.221 03:56:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:14.221 03:56:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.221 03:56:49 -- nvmf/common.sh@161 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.221 03:56:49 -- nvmf/common.sh@162 -- # true 00:13:14.221 03:56:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:14.221 03:56:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:14.221 03:56:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:14.221 03:56:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:14.221 03:56:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:14.221 03:56:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:14.221 03:56:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:14.221 03:56:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:14.479 03:56:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:14.479 03:56:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:14.479 03:56:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:14.479 03:56:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:14.479 03:56:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:14.479 03:56:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:14.479 03:56:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:14.479 03:56:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:14.479 03:56:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:14.479 03:56:49 -- 
00:13:14.479 03:56:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:13:14.479 03:56:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:13:14.479 03:56:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:13:14.479 03:56:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:13:14.479 03:56:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:13:14.479 03:56:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:13:14.479 03:56:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:13:14.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:14.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms
00:13:14.479
00:13:14.479 --- 10.0.0.2 ping statistics ---
00:13:14.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:14.479 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:13:14.479 03:56:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:13:14.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:13:14.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms
00:13:14.479
00:13:14.479 --- 10.0.0.3 ping statistics ---
00:13:14.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:14.479 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:13:14.479 03:56:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:13:14.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:14.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:13:14.479
00:13:14.479 --- 10.0.0.1 ping statistics ---
00:13:14.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:14.479 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:13:14.479 03:56:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:14.479 03:56:49 -- nvmf/common.sh@421 -- # return 0
00:13:14.479 03:56:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:14.479 03:56:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:14.479 03:56:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:14.479 03:56:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:14.479 03:56:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:14.479 03:56:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:14.479 03:56:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:14.479 03:56:49 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:13:14.479 03:56:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:14.479 03:56:49 -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:14.479 03:56:49 -- common/autotest_common.sh@10 -- # set +x
00:13:14.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:14.480 03:56:49 -- nvmf/common.sh@469 -- # nvmfpid=70568
00:13:14.480 03:56:49 -- nvmf/common.sh@470 -- # waitforlisten 70568
00:13:14.480 03:56:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:13:14.480 03:56:49 -- common/autotest_common.sh@829 -- # '[' -z 70568 ']'
00:13:14.480 03:56:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:14.480 03:56:49 -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:14.480 03:56:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
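The nvmfappstart/waitforlisten trace above backgrounds nvmf_tgt inside the namespace and then polls (max_retries=100) until the app answers on /var/tmp/spdk.sock; the target's own startup banner follows below. A rough standalone equivalent that uses rpc.py's rpc_get_methods call as the readiness probe (the real helper also keeps checking that the pid is still alive):

# Launch the target in the namespace and wait for its RPC socket to answer.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for (( i = 0; i < 100; i++ )); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done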
00:13:14.480 03:56:49 -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:14.480 03:56:49 -- common/autotest_common.sh@10 -- # set +x
00:13:14.480 [2024-11-08 03:56:49.544097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:14.480 [2024-11-08 03:56:49.544212] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:14.736 [2024-11-08 03:56:49.685169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:14.736 [2024-11-08 03:56:49.840099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:14.736 [2024-11-08 03:56:49.840282] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:14.736 [2024-11-08 03:56:49.840301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:14.736 [2024-11-08 03:56:49.840312] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:14.736 [2024-11-08 03:56:49.840487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:14.736 [2024-11-08 03:56:49.840495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:15.669 03:56:50 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:15.669 03:56:50 -- common/autotest_common.sh@862 -- # return 0
00:13:15.669 03:56:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:15.669 03:56:50 -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 03:56:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 [2024-11-08 03:56:50.575025] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 [2024-11-08 03:56:50.597136] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 NULL1
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
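Each rpc_cmd above is scripts/rpc.py talking to the target's UNIX socket, so the whole subsystem setup can be replayed by hand. The same four calls, as a sketch (for nvmf_create_subsystem, -a allows any host, -s sets the serial number and -m caps the namespace count; the transport flags are passed through from NVMF_TRANSPORT_OPTS exactly as traced above):

rpc=./scripts/rpc.py   # defaults to -s /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev with 512-byte blocks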
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 Delay0
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:15.669 03:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.669 03:56:50 -- common/autotest_common.sh@10 -- # set +x
00:13:15.669 03:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@28 -- # perf_pid=70619
00:13:15.669 03:56:50 -- target/delete_subsystem.sh@30 -- # sleep 2
00:13:15.928 [2024-11-08 03:56:50.799370] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:13:17.828 03:56:52 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:17.828 03:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:17.828 03:56:52 -- common/autotest_common.sh@10 -- # set +x
00:13:17.828 Read completed with error (sct=0, sc=8)
00:13:17.828 starting I/O failed: -6
[... interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines condensed: the queued commands begin aborting ...]
00:13:17.829 Write completed with error (sct=0, sc=8)
[... interleaved completion-error and "starting I/O failed: -6" lines condensed ...]
00:13:17.829 [2024-11-08 03:56:52.842445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3f0000c00 is same with the state(5) to be set
[... interleaved completion-error and "starting I/O failed: -6" lines condensed ...]
00:13:17.830 [2024-11-08 03:56:52.844052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fa7d0 is same with the state(5) to be set
00:13:18.766 [2024-11-08 03:56:53.814180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc5a0 is same with the state(5) to be set
[... completion-error lines condensed ...]
00:13:18.766 [2024-11-08 03:56:53.840177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fb950 is same with the state(5) to be set
[... completion-error lines condensed ...]
00:13:18.766 [2024-11-08 03:56:53.841795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21faa80 is same with the state(5) to be set
[... completion-error lines continue ...]
Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 [2024-11-08 03:56:53.842358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3f000bf20 is same with the state(5) to be set 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 03:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Read completed with error (sct=0, sc=8) 00:13:18.766 Write completed with error (sct=0, sc=8) 00:13:18.767 Read completed with error (sct=0, sc=8) 00:13:18.767 [2024-11-08 03:56:53.843124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3f000c480 is same with the state(5) to be set 00:13:18.767 03:56:53 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:18.767 03:56:53 -- target/delete_subsystem.sh@35 -- # kill -0 70619 00:13:18.767 03:56:53 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:18.767 [2024-11-08 03:56:53.844586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fc5a0 (9): Bad file descriptor 00:13:18.767 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:18.767 Initializing NVMe Controllers 00:13:18.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:18.767 Controller IO queue size 128, less than required. 00:13:18.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:13:18.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:18.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:18.767 Initialization complete. Launching workers.
00:13:18.767 ========================================================
00:13:18.767 Latency(us)
00:13:18.767 Device Information : IOPS MiB/s Average min max
00:13:18.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.69 0.09 896864.35 1555.60 1016799.99
00:13:18.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.32 0.09 883898.05 1245.26 1016727.70
00:13:18.767 ========================================================
00:13:18.767 Total : 366.01 0.18 890618.07 1245.26 1016799.99
00:13:18.767
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@35 -- # kill -0 70619
00:13:19.335 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70619) - No such process
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@45 -- # NOT wait 70619
00:13:19.335 03:56:54 -- common/autotest_common.sh@650 -- # local es=0
00:13:19.335 03:56:54 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70619
00:13:19.335 03:56:54 -- common/autotest_common.sh@638 -- # local arg=wait
00:13:19.335 03:56:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:19.335 03:56:54 -- common/autotest_common.sh@642 -- # type -t wait
00:13:19.335 03:56:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:19.335 03:56:54 -- common/autotest_common.sh@653 -- # wait 70619
00:13:19.335 03:56:54 -- common/autotest_common.sh@653 -- # es=1
00:13:19.335 03:56:54 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:19.335 03:56:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:19.335 03:56:54 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:19.335 03:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.335 03:56:54 -- common/autotest_common.sh@10 -- # set +x
00:13:19.335 03:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:19.335 03:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.335 03:56:54 -- common/autotest_common.sh@10 -- # set +x
00:13:19.335 [2024-11-08 03:56:54.366118] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:19.335 03:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:19.335 03:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.335 03:56:54 -- common/autotest_common.sh@10 -- # set +x
00:13:19.335 03:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@54 -- # perf_pid=70670
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@56 -- # delay=0
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:19.335 03:56:54 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
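A note on the aborted run above: sct=0/sc=8 on each failed completion is, per the NVMe base specification's generic command status table, status code 0x08, "Command Aborted due to SQ Deletion", which is the expected completion status once delete_subsystem tears the queues down under live I/O. Below, a best-effort gloss of the main flags in the spdk_nvme_perf command just traced; the descriptions paraphrase the tool's usage text as understood here and are not output from this run:

    # Flag gloss for the spdk_nvme_perf invocation traced above:
    #   -c 0xC      core mask (bits 2 and 3), matching the "NSID 1 with lcore 2/3" lines
    #   -r '...'    transport ID of the target: trtype tcp, IPv4, 10.0.0.2, service 4420
    #   -t 3        run time in seconds
    #   -q 128      queue depth; the controller above also reports an IO queue size of 128
    #   -w randrw   random mixed workload; -M 70 sets the read share to 70%
    #   -o 512      I/O size in bytes
    #   (-P 4 is left unglossed here; see the spdk_nvme_perf usage text for the authoritative description)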
00:13:19.594 [2024-11-08 03:56:54.546186] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:13:19.852 03:56:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:19.852 03:56:54 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:19.852 03:56:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:20.419 03:56:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:20.419 03:56:55 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:20.419 03:56:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:20.985 03:56:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:20.985 03:56:55 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:20.985 03:56:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:21.552 03:56:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:21.552 03:56:56 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:21.552 03:56:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:21.810 03:56:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:21.810 03:56:56 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:21.810 03:56:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:22.376 03:56:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:22.376 03:56:57 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:22.376 03:56:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:22.635 Initializing NVMe Controllers
00:13:22.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:22.635 Controller IO queue size 128, less than required.
00:13:22.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:22.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:22.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:22.635 Initialization complete. Launching workers.
00:13:22.635 ========================================================
00:13:22.635 Latency(us)
00:13:22.635 Device Information : IOPS MiB/s Average min max
00:13:22.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003135.21 1000150.60 1010118.75
00:13:22.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005212.21 1000197.16 1011906.11
00:13:22.635 ========================================================
00:13:22.635 Total : 256.00 0.12 1004173.71 1000150.60 1011906.11
00:13:22.635
00:13:22.893 03:56:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:22.893 03:56:57 -- target/delete_subsystem.sh@57 -- # kill -0 70670
00:13:22.893 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70670) - No such process
00:13:22.893 03:56:57 -- target/delete_subsystem.sh@67 -- # wait 70670
00:13:22.893 03:56:57 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:22.893 03:56:57 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:22.893 03:56:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:22.893 03:56:57 -- nvmf/common.sh@116 -- # sync
00:13:22.893 03:56:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:22.893 03:56:57 -- nvmf/common.sh@119 -- # set +e
00:13:22.893 03:56:57 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:22.893 03:56:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:22.893 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:23.152 03:56:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:23.152 03:56:58 -- nvmf/common.sh@123 -- # set -e
00:13:23.152 03:56:58 -- nvmf/common.sh@124 -- # return 0
00:13:23.152 03:56:58 -- nvmf/common.sh@477 -- # '[' -n 70568 ']'
00:13:23.152 03:56:58 -- nvmf/common.sh@478 -- # killprocess 70568
00:13:23.152 03:56:58 -- common/autotest_common.sh@936 -- # '[' -z 70568 ']'
00:13:23.152 03:56:58 -- common/autotest_common.sh@940 -- # kill -0 70568
00:13:23.152 03:56:58 -- common/autotest_common.sh@941 -- # uname
00:13:23.152 03:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:23.152 03:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70568
00:13:23.152 03:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:23.152 03:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:23.152 killing process with pid 70568
00:13:23.152 03:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70568'
00:13:23.152 03:56:58 -- common/autotest_common.sh@955 -- # kill 70568
00:13:23.152 03:56:58 -- common/autotest_common.sh@960 -- # wait 70568
00:13:23.410 03:56:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:23.410 03:56:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:23.410 03:56:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:23.410 03:56:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:23.410 03:56:58 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:23.410 03:56:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:23.410 03:56:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:23.410 03:56:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:23.410 03:56:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:13:23.410
00:13:23.410 real 0m9.522s
00:13:23.410 user 0m28.826s
00:13:23.410 sys 0m1.616s
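The delay loops traced above (delete_subsystem.sh@35-@38 for the first perf run, @57-@60 for the second) are a plain bounded wait-for-exit poll. A minimal sketch of that pattern, with pid standing in for the traced perf PID; the variable names are illustrative, not the script's own:

    # Poll until the perf process exits; kill -0 probes existence without sending a signal.
    # The traced run does not silence stderr, hence the "kill: (70670) - No such process" line.
    delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && break   # bail out after roughly 10 s (20 iterations x 0.5 s)
        sleep 0.5
    done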
00:13:23.410 03:56:58 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:23.410 03:56:58 -- common/autotest_common.sh@10 -- # set +x
00:13:23.410 ************************************
00:13:23.410 END TEST nvmf_delete_subsystem
00:13:23.410 ************************************
00:13:23.410 03:56:58 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]]
00:13:23.410 03:56:58 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]]
00:13:23.410 03:56:58 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:23.410 03:56:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:23.410 03:56:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:23.410 03:56:58 -- common/autotest_common.sh@10 -- # set +x
00:13:23.410 ************************************
00:13:23.410 START TEST nvmf_vfio_user
00:13:23.410 ************************************
00:13:23.410 03:56:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:23.670 * Looking for test storage...
00:13:23.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:13:23.670 03:56:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:23.670 03:56:58 -- common/autotest_common.sh@1690 -- # lcov --version
00:13:23.670 03:56:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:23.670 03:56:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:23.670 03:56:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:23.670 03:56:58 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:23.670 03:56:58 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:23.670 03:56:58 -- scripts/common.sh@335 -- # IFS=.-:
00:13:23.670 03:56:58 -- scripts/common.sh@335 -- # read -ra ver1
00:13:23.670 03:56:58 -- scripts/common.sh@336 -- # IFS=.-:
00:13:23.670 03:56:58 -- scripts/common.sh@336 -- # read -ra ver2
00:13:23.670 03:56:58 -- scripts/common.sh@337 -- # local 'op=<'
00:13:23.670 03:56:58 -- scripts/common.sh@339 -- # ver1_l=2
00:13:23.670 03:56:58 -- scripts/common.sh@340 -- # ver2_l=1
00:13:23.670 03:56:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:23.670 03:56:58 -- scripts/common.sh@343 -- # case "$op" in
00:13:23.670 03:56:58 -- scripts/common.sh@344 -- # : 1
00:13:23.670 03:56:58 -- scripts/common.sh@363 -- # (( v = 0 ))
00:13:23.670 03:56:58 -- scripts/common.sh@363 -- # (( v < (ver1_l >
ver1_l : ver2_l) ))
00:13:23.670 03:56:58 -- scripts/common.sh@364 -- # decimal 1
00:13:23.670 03:56:58 -- scripts/common.sh@352 -- # local d=1
00:13:23.670 03:56:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:23.670 03:56:58 -- scripts/common.sh@354 -- # echo 1
00:13:23.670 03:56:58 -- scripts/common.sh@364 -- # ver1[v]=1
00:13:23.670 03:56:58 -- scripts/common.sh@365 -- # decimal 2
00:13:23.670 03:56:58 -- scripts/common.sh@352 -- # local d=2
00:13:23.670 03:56:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:23.670 03:56:58 -- scripts/common.sh@354 -- # echo 2
00:13:23.670 03:56:58 -- scripts/common.sh@365 -- # ver2[v]=2
00:13:23.670 03:56:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:23.670 03:56:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:23.670 03:56:58 -- scripts/common.sh@367 -- # return 0
00:13:23.670 03:56:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:23.670 03:56:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:13:23.670 03:56:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' [same flag block as above; duplicate omitted] '
00:13:23.670 03:56:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov [same flag block as above; duplicate omitted]'
00:13:23.670 03:56:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov [same flag block as above; duplicate omitted]'
00:13:23.670 03:56:58 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:13:23.670 03:56:58 -- nvmf/common.sh@7 -- # uname -s
00:13:23.670 03:56:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:23.670 03:56:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:23.670 03:56:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:23.670 03:56:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:23.670 03:56:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:23.670 03:56:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:23.670 03:56:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:23.670 03:56:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:23.670 03:56:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:23.670 03:56:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:23.670 03:56:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01
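The host NQN recorded above is generated once and reused for every later connect. The equivalent stand-alone step, assuming nvme-cli is installed; the host-ID derivation shown is an illustrative guess at what nvmf/common.sh@18 does with the value, not a quote of the script:

    # Produce nqn.2014-08.org.nvmexpress:uuid:<uuid> and keep the trailing UUID as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # strips everything through the last colon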
00:13:23.670 03:56:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01
00:13:23.670 03:56:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:23.670 03:56:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:23.670 03:56:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:13:23.670 03:56:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:23.670 03:56:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:23.670 03:56:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:23.670 03:56:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:23.670 03:56:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated; duplicates omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:23.670 03:56:58 -- paths/export.sh@3 -- # PATH=[as @2, beginning instead with /opt/go/1.21.1/bin]
00:13:23.670 03:56:58 -- paths/export.sh@4 -- # PATH=[as @2, beginning instead with /opt/protoc/21.7/bin]
00:13:23.670 03:56:58 -- paths/export.sh@5 -- # export PATH
00:13:23.670 03:56:58 -- paths/export.sh@6 -- # echo [PATH value as in @4]
00:13:23.671 03:56:58 -- nvmf/common.sh@46 -- # : 0
00:13:23.671 03:56:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:23.671 03:56:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:23.671 03:56:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:23.671 03:56:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:23.671 03:56:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:23.671 03:56:58 -- nvmf/common.sh@32 -- #
'[' -n '' ']' 00:13:23.671 03:56:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.671 03:56:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70800 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70800' 00:13:23.671 Process pid: 70800 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70800 00:13:23.671 03:56:58 -- common/autotest_common.sh@829 -- # '[' -z 70800 ']' 00:13:23.671 03:56:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.671 03:56:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.671 03:56:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.671 03:56:58 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:23.671 03:56:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.671 03:56:58 -- common/autotest_common.sh@10 -- # set +x 00:13:23.671 [2024-11-08 03:56:58.765644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:23.671 [2024-11-08 03:56:58.765752] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.929 [2024-11-08 03:56:58.905205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.929 [2024-11-08 03:56:59.024013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:23.929 [2024-11-08 03:56:59.024392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.929 [2024-11-08 03:56:59.024459] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.929 [2024-11-08 03:56:59.024619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
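The startup notices above spell out how to inspect the target's tracepoints while it runs. Collected into runnable form, exactly as the notices themselves suggest (run from the spdk repo root):

    # Live snapshot of the nvmf target's tracepoints (shm name "nvmf", instance id 0):
    build/bin/spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis, per the second notice:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0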
00:13:23.929 [2024-11-08 03:56:59.024742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.929 [2024-11-08 03:56:59.024838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.929 [2024-11-08 03:56:59.025469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.929 [2024-11-08 03:56:59.025479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.865 03:56:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.865 03:56:59 -- common/autotest_common.sh@862 -- # return 0 00:13:24.865 03:56:59 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.799 03:57:00 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:26.058 03:57:01 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:26.058 03:57:01 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:26.058 03:57:01 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.058 03:57:01 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:26.058 03:57:01 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:26.316 Malloc1 00:13:26.316 03:57:01 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:26.581 03:57:01 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:26.840 03:57:01 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:27.099 03:57:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.099 03:57:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:27.099 03:57:02 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.666 Malloc2 00:13:27.666 03:57:02 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:27.666 03:57:02 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:27.924 03:57:02 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:28.182 03:57:03 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:28.182 [2024-11-08 03:57:03.280170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
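For reference, the per-device vfio-user provisioning that the trace above performs twice, collected into one sketch; the paths, sizes, and NQNs are exactly as traced for device 1, and the identify run launched above continues below:

    # Stand up one vfio-user controller: transport, socket directory, malloc bdev,
    # subsystem, namespace, and a vfio-user listener rooted at the socket directory.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0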
00:13:28.182 [2024-11-08 03:57:03.280221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70938 ] 00:13:28.441 [2024-11-08 03:57:03.421956] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:28.441 [2024-11-08 03:57:03.430893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:28.441 [2024-11-08 03:57:03.430944] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2e1baed000 00:13:28.441 [2024-11-08 03:57:03.431880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.432869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.433894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.434908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.435902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.436912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.441 [2024-11-08 03:57:03.437925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.442 [2024-11-08 03:57:03.438969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.442 [2024-11-08 03:57:03.439934] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:28.442 [2024-11-08 03:57:03.439980] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2e1b175000 00:13:28.442 [2024-11-08 03:57:03.441221] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:28.442 [2024-11-08 03:57:03.457616] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:28.442 [2024-11-08 03:57:03.457666] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:28.442 [2024-11-08 03:57:03.463071] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:28.442 [2024-11-08 03:57:03.463161] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:28.442 [2024-11-08 03:57:03.463276] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:28.442 [2024-11-08 
03:57:03.463309] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:28.442 [2024-11-08 03:57:03.463316] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:28.442 [2024-11-08 03:57:03.464058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:28.442 [2024-11-08 03:57:03.464097] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:28.442 [2024-11-08 03:57:03.464109] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:28.442 [2024-11-08 03:57:03.465073] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:28.442 [2024-11-08 03:57:03.465110] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:28.442 [2024-11-08 03:57:03.465123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.466081] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:28.442 [2024-11-08 03:57:03.466121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.467084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:28.442 [2024-11-08 03:57:03.467107] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:28.442 [2024-11-08 03:57:03.467115] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.467124] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.467232] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:28.442 [2024-11-08 03:57:03.467238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.467244] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:28.442 [2024-11-08 03:57:03.468092] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:28.442 [2024-11-08 03:57:03.469086] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:28.442 [2024-11-08 03:57:03.470096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:13:28.442 [2024-11-08 03:57:03.471139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:28.442 [2024-11-08 03:57:03.472098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:28.442 [2024-11-08 03:57:03.472121] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:28.442 [2024-11-08 03:57:03.472139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472162] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:28.442 [2024-11-08 03:57:03.472181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472201] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.442 [2024-11-08 03:57:03.472209] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.442 [2024-11-08 03:57:03.472229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 03:57:03.472306] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:28.442 [2024-11-08 03:57:03.472312] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:28.442 [2024-11-08 03:57:03.472317] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:28.442 [2024-11-08 03:57:03.472322] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:28.442 [2024-11-08 03:57:03.472328] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:28.442 [2024-11-08 03:57:03.472333] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:28.442 [2024-11-08 03:57:03.472338] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 03:57:03.472412] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.442 [2024-11-08 03:57:03.472437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.442 [2024-11-08 03:57:03.472447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.442 [2024-11-08 03:57:03.472457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.442 [2024-11-08 03:57:03.472463] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472479] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 03:57:03.472510] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:28.442 [2024-11-08 03:57:03.472516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 03:57:03.472627] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472649] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:28.442 [2024-11-08 03:57:03.472654] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:28.442 [2024-11-08 03:57:03.472662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 
03:57:03.472699] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:28.442 [2024-11-08 03:57:03.472713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472723] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472732] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.442 [2024-11-08 03:57:03.472738] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.442 [2024-11-08 03:57:03.472745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.442 [2024-11-08 03:57:03.472779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:28.442 [2024-11-08 03:57:03.472799] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:28.442 [2024-11-08 03:57:03.472810] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472819] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.443 [2024-11-08 03:57:03.472824] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.443 [2024-11-08 03:57:03.472831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.472848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.472858] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472878] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472886] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472899] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:28.443 [2024-11-08 03:57:03.472904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:28.443 [2024-11-08 03:57:03.472910] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:28.443 [2024-11-08 03:57:03.472935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.472948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.472965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.472977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.472991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.473002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.473017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.473036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.473052] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:28.443 [2024-11-08 03:57:03.473058] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:28.443 [2024-11-08 03:57:03.473062] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:28.443 [2024-11-08 03:57:03.473067] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:28.443 [2024-11-08 03:57:03.473074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:28.443 [2024-11-08 03:57:03.473083] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:28.443 [2024-11-08 03:57:03.473088] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:28.443 [2024-11-08 03:57:03.473095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.473103] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:28.443 [2024-11-08 03:57:03.473108] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.443 [2024-11-08 03:57:03.473115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.473124] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:28.443 [2024-11-08 03:57:03.473129] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:28.443 [2024-11-08 03:57:03.473135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:28.443 [2024-11-08 03:57:03.473144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.473163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.473175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:28.443 [2024-11-08 03:57:03.473185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:28.443 ===================================================== 00:13:28.443 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.443 ===================================================== 00:13:28.443 Controller Capabilities/Features 00:13:28.443 ================================ 00:13:28.443 Vendor ID: 4e58 00:13:28.443 Subsystem Vendor ID: 4e58 00:13:28.443 Serial Number: SPDK1 00:13:28.443 Model Number: SPDK bdev Controller 00:13:28.443 Firmware Version: 24.01.1 00:13:28.443 Recommended Arb Burst: 6 00:13:28.443 IEEE OUI Identifier: 8d 6b 50 00:13:28.443 Multi-path I/O 00:13:28.443 May have multiple subsystem ports: Yes 00:13:28.443 May have multiple controllers: Yes 00:13:28.443 Associated with SR-IOV VF: No 00:13:28.443 Max Data Transfer Size: 131072 00:13:28.443 Max Number of Namespaces: 32 00:13:28.443 Max Number of I/O Queues: 127 00:13:28.443 NVMe Specification Version (VS): 1.3 00:13:28.443 NVMe Specification Version (Identify): 1.3 00:13:28.443 Maximum Queue Entries: 256 00:13:28.443 Contiguous Queues Required: Yes 00:13:28.443 Arbitration Mechanisms Supported 00:13:28.443 Weighted Round Robin: Not Supported 00:13:28.443 Vendor Specific: Not Supported 00:13:28.443 Reset Timeout: 15000 ms 00:13:28.443 Doorbell Stride: 4 bytes 00:13:28.443 NVM Subsystem Reset: Not Supported 00:13:28.443 Command Sets Supported 00:13:28.443 NVM Command Set: Supported 00:13:28.443 Boot Partition: Not Supported 00:13:28.443 Memory Page Size Minimum: 4096 bytes 00:13:28.443 Memory Page Size Maximum: 4096 bytes 00:13:28.443 Persistent Memory Region: Not Supported 00:13:28.443 Optional Asynchronous Events Supported 00:13:28.443 Namespace Attribute Notices: Supported 00:13:28.443 Firmware Activation Notices: Not Supported 00:13:28.443 ANA Change Notices: Not Supported 00:13:28.443 PLE Aggregate Log Change Notices: Not Supported 00:13:28.443 LBA Status Info Alert Notices: Not Supported 00:13:28.443 EGE Aggregate Log Change Notices: Not Supported 00:13:28.443 Normal NVM Subsystem Shutdown event: Not Supported 00:13:28.443 Zone Descriptor Change Notices: Not Supported 00:13:28.443 Discovery Log Change Notices: Not Supported 00:13:28.443 Controller Attributes 00:13:28.443 128-bit Host Identifier: Supported 00:13:28.443 Non-Operational Permissive Mode: Not Supported 00:13:28.443 NVM Sets: Not Supported 00:13:28.443 Read Recovery Levels: Not Supported 00:13:28.443 Endurance Groups: Not Supported 00:13:28.443 Predictable Latency Mode: Not Supported 00:13:28.443 Traffic Based Keep ALive: Not Supported 00:13:28.443 Namespace Granularity: Not Supported 00:13:28.443 SQ Associations: Not Supported 00:13:28.443 UUID List: Not Supported 00:13:28.443 Multi-Domain Subsystem: Not Supported 00:13:28.443 Fixed Capacity Management: Not Supported 00:13:28.443 
Variable Capacity Management: Not Supported 00:13:28.443 Delete Endurance Group: Not Supported 00:13:28.443 Delete NVM Set: Not Supported 00:13:28.443 Extended LBA Formats Supported: Not Supported 00:13:28.443 Flexible Data Placement Supported: Not Supported 00:13:28.443 00:13:28.443 Controller Memory Buffer Support 00:13:28.443 ================================ 00:13:28.443 Supported: No 00:13:28.443 00:13:28.443 Persistent Memory Region Support 00:13:28.443 ================================ 00:13:28.443 Supported: No 00:13:28.443 00:13:28.443 Admin Command Set Attributes 00:13:28.443 ============================ 00:13:28.443 Security Send/Receive: Not Supported 00:13:28.443 Format NVM: Not Supported 00:13:28.443 Firmware Activate/Download: Not Supported 00:13:28.443 Namespace Management: Not Supported 00:13:28.443 Device Self-Test: Not Supported 00:13:28.443 Directives: Not Supported 00:13:28.443 NVMe-MI: Not Supported 00:13:28.443 Virtualization Management: Not Supported 00:13:28.443 Doorbell Buffer Config: Not Supported 00:13:28.443 Get LBA Status Capability: Not Supported 00:13:28.443 Command & Feature Lockdown Capability: Not Supported 00:13:28.443 Abort Command Limit: 4 00:13:28.443 Async Event Request Limit: 4 00:13:28.443 Number of Firmware Slots: N/A 00:13:28.443 Firmware Slot 1 Read-Only: N/A 00:13:28.443 Firmware Activation Without Reset: N/A 00:13:28.443 Multiple Update Detection Support: N/A 00:13:28.443 Firmware Update Granularity: No Information Provided 00:13:28.443 Per-Namespace SMART Log: No 00:13:28.443 Asymmetric Namespace Access Log Page: Not Supported 00:13:28.443 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:28.443 Command Effects Log Page: Supported 00:13:28.443 Get Log Page Extended Data: Supported 00:13:28.443 Telemetry Log Pages: Not Supported 00:13:28.443 Persistent Event Log Pages: Not Supported 00:13:28.443 Supported Log Pages Log Page: May Support 00:13:28.443 Commands Supported & Effects Log Page: Not Supported 00:13:28.443 Feature Identifiers & Effects Log Page:May Support 00:13:28.443 NVMe-MI Commands & Effects Log Page: May Support 00:13:28.443 Data Area 4 for Telemetry Log: Not Supported 00:13:28.443 Error Log Page Entries Supported: 128 00:13:28.444 Keep Alive: Supported 00:13:28.444 Keep Alive Granularity: 10000 ms 00:13:28.444 00:13:28.444 NVM Command Set Attributes 00:13:28.444 ========================== 00:13:28.444 Submission Queue Entry Size 00:13:28.444 Max: 64 00:13:28.444 Min: 64 00:13:28.444 Completion Queue Entry Size 00:13:28.444 Max: 16 00:13:28.444 Min: 16 00:13:28.444 Number of Namespaces: 32 00:13:28.444 Compare Command: Supported 00:13:28.444 Write Uncorrectable Command: Not Supported 00:13:28.444 Dataset Management Command: Supported 00:13:28.444 Write Zeroes Command: Supported 00:13:28.444 Set Features Save Field: Not Supported 00:13:28.444 Reservations: Not Supported 00:13:28.444 Timestamp: Not Supported 00:13:28.444 Copy: Supported 00:13:28.444 Volatile Write Cache: Present 00:13:28.444 Atomic Write Unit (Normal): 1 00:13:28.444 Atomic Write Unit (PFail): 1 00:13:28.444 Atomic Compare & Write Unit: 1 00:13:28.444 Fused Compare & Write: Supported 00:13:28.444 Scatter-Gather List 00:13:28.444 SGL Command Set: Supported (Dword aligned) 00:13:28.444 SGL Keyed: Not Supported 00:13:28.444 SGL Bit Bucket Descriptor: Not Supported 00:13:28.444 SGL Metadata Pointer: Not Supported 00:13:28.444 Oversized SGL: Not Supported 00:13:28.444 SGL Metadata Address: Not Supported 00:13:28.444 SGL Offset: Not Supported 00:13:28.444 Transport SGL Data 
Block: Not Supported 00:13:28.444 Replay Protected Memory Block: Not Supported 00:13:28.444 00:13:28.444 Firmware Slot Information 00:13:28.444 ========================= 00:13:28.444 Active slot: 1 00:13:28.444 Slot 1 Firmware Revision: 24.01.1 00:13:28.444 00:13:28.444 00:13:28.444 Commands Supported and Effects 00:13:28.444 ============================== 00:13:28.444 Admin Commands 00:13:28.444 -------------- 00:13:28.444 Get Log Page (02h): Supported 00:13:28.444 Identify (06h): Supported 00:13:28.444 Abort (08h): Supported 00:13:28.444 Set Features (09h): Supported 00:13:28.444 Get Features (0Ah): Supported 00:13:28.444 Asynchronous Event Request (0Ch): Supported 00:13:28.444 Keep Alive (18h): Supported 00:13:28.444 I/O Commands 00:13:28.444 ------------ 00:13:28.444 Flush (00h): Supported LBA-Change 00:13:28.444 Write (01h): Supported LBA-Change 00:13:28.444 Read (02h): Supported 00:13:28.444 Compare (05h): Supported 00:13:28.444 Write Zeroes (08h): Supported LBA-Change 00:13:28.444 Dataset Management (09h): Supported LBA-Change 00:13:28.444 Copy (19h): Supported LBA-Change 00:13:28.444 Unknown (79h): Supported LBA-Change 00:13:28.444 Unknown (7Ah): Supported 00:13:28.444 00:13:28.444 Error Log 00:13:28.444 ========= 00:13:28.444 00:13:28.444 Arbitration 00:13:28.444 =========== 00:13:28.444 Arbitration Burst: 1 00:13:28.444 00:13:28.444 Power Management 00:13:28.444 ================ 00:13:28.444 Number of Power States: 1 00:13:28.444 Current Power State: Power State #0 00:13:28.444 Power State #0: 00:13:28.444 Max Power: 0.00 W 00:13:28.444 Non-Operational State: Operational 00:13:28.444 Entry Latency: Not Reported 00:13:28.444 Exit Latency: Not Reported 00:13:28.444 Relative Read Throughput: 0 00:13:28.444 Relative Read Latency: 0 00:13:28.444 Relative Write Throughput: 0 00:13:28.444 Relative Write Latency: 0 00:13:28.444 Idle Power: Not Reported 00:13:28.444 Active Power: Not Reported 00:13:28.444 Non-Operational Permissive Mode: Not Supported 00:13:28.444 00:13:28.444 Health Information 00:13:28.444 ================== 00:13:28.444 Critical Warnings: 00:13:28.444 Available Spare Space: OK 00:13:28.444 Temperature: OK 00:13:28.444 Device Reliability: OK 00:13:28.444 Read Only: No 00:13:28.444 Volatile Memory Backup: OK 00:13:28.444 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:28.444 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:28.444 Available Spare: 0% 00:13:28.444 Available Spare Threshold: 0% 00:13:28.444 Life Percentage Used: 0% 00:13:28.444 Data Units Read: 0 00:13:28.444 Data Units Written: 0 00:13:28.444 Host Read Commands: 0 00:13:28.444 Host Write Commands: 0 00:13:28.444 Controller Busy Time: 0 minutes 00:13:28.444 Power Cycles: 0 00:13:28.444 Power On Hours: 0 hours 00:13:28.444 Unsafe Shutdowns: 0 00:13:28.444 Unrecoverable Media Errors: 0 00:13:28.444 Lifetime Error Log Entries: 0 00:13:28.444 Warning Temperature Time: 0 minutes 00:13:28.444 Critical Temperature Time: 0 minutes 00:13:28.444 00:13:28.444 Number of Queues 00:13:28.444 ================ 00:13:28.444 Number of I/O Submission Queues: 127 00:13:28.444 Number of I/O Completion Queues: 127 00:13:28.444 00:13:28.444 Active Namespaces 00:13:28.444 ================= 00:13:28.444 Namespace ID:1 00:13:28.444 Error Recovery Timeout: Unlimited 00:13:28.444 Command Set Identifier: NVM (00h) 00:13:28.444 Deallocate: Supported 00:13:28.444 Deallocated/Unwritten Error: Not Supported 00:13:28.444 Deallocated Read Value: Unknown 00:13:28.444 Deallocate in Write Zeroes: Not Supported 00:13:28.444 Deallocated Guard Field: 0xFFFF 00:13:28.444 Flush: Supported 00:13:28.444 Reservation: Supported 00:13:28.444 Namespace Sharing Capabilities: Multiple Controllers 00:13:28.444 Size (in LBAs): 131072 (0GiB) 00:13:28.444 Capacity (in LBAs): 131072 (0GiB) 00:13:28.444 Utilization (in LBAs): 131072 (0GiB) 00:13:28.444 NGUID: A944AA8C2A6B4291BE2F8CCD4840F1C1 00:13:28.444 UUID: a944aa8c-2a6b-4291-be2f-8ccd4840f1c1 00:13:28.444 Thin Provisioning: Not Supported 00:13:28.444 Per-NS Atomic Units: Yes 00:13:28.444 Atomic Boundary Size (Normal): 0 00:13:28.444 Atomic Boundary Size (PFail): 0 00:13:28.444 Atomic Boundary Offset: 0 00:13:28.444 Maximum Single Source Range Length: 65535 00:13:28.444 Maximum Copy Length: 65535 00:13:28.444 Maximum Source Range Count: 1 00:13:28.444 NGUID/EUI64 Never Reused: No 00:13:28.444 Namespace Write Protected: No 00:13:28.444 Number of LBA Formats: 1 00:13:28.444 Current LBA Format: LBA Format #00 00:13:28.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:28.444 00:13:28.444 
[2024-11-08 03:57:03.473329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:28.444 [2024-11-08 03:57:03.473347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:28.444 [2024-11-08 03:57:03.473385] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:28.444 [2024-11-08 03:57:03.473398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.444 [2024-11-08 03:57:03.473406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.444 [2024-11-08 03:57:03.473429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.444 [2024-11-08 03:57:03.473438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.444 [2024-11-08 03:57:03.474131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:28.444 [2024-11-08 03:57:03.474161] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:28.444 [2024-11-08 03:57:03.475172] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:28.444 [2024-11-08 03:57:03.475190] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:28.444 [2024-11-08 03:57:03.476127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:28.444 [2024-11-08 03:57:03.476167] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:28.444 [2024-11-08 03:57:03.476400] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:28.444 [2024-11-08 03:57:03.479450] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
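The register traffic in the teardown above is the standard NVMe normal-shutdown handshake: the driver reads CC (offset 0x14, value 0x460001), writes it back with the shutdown-notification field set (0x464001; the only changed bit is bit 14, SHN = 01b), then polls CSTS (offset 0x1c) until it reads 0x9, meaning RDY = 1 and SHST = 10b, shutdown processing complete, which is why nvme_ctrlr_shutdown_poll_async reports completion in 0 milliseconds. A minimal decode sketch in Python, assuming the standard NVMe 1.3 register layout (the helper names are illustrative, not part of SPDK):

    # Decode the CC/CSTS values printed in the debug log above,
    # using the standard NVMe 1.3 bit layout for both registers.
    def decode_cc(cc: int) -> dict:
        return {
            "EN": cc & 0x1,              # controller enable
            "SHN": (cc >> 14) & 0x3,     # shutdown notification (01b = normal)
            "IOSQES": (cc >> 16) & 0xF,  # log2 of SQ entry size (6 -> 64 bytes)
            "IOCQES": (cc >> 20) & 0xF,  # log2 of CQ entry size (4 -> 16 bytes)
        }

    def decode_csts(csts: int) -> dict:
        return {
            "RDY": csts & 0x1,           # controller ready
            "CFS": (csts >> 1) & 0x1,    # controller fatal status
            "SHST": (csts >> 2) & 0x3,   # 10b = shutdown processing complete
        }

    print(decode_cc(0x460001))   # EN=1, SHN=0: enabled, 64B SQEs, 16B CQEs
    print(decode_cc(0x464001))   # SHN=1: normal shutdown requested
    print(decode_csts(0x9))      # RDY=1, SHST=2: shutdown complete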
03:57:03 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:33.773 Initializing NVMe Controllers 00:13:33.773 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.773 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:33.773 Initialization complete. Launching workers. 00:13:33.773 ======================================================== 00:13:33.773 Latency(us) 00:13:33.773 Device Information : IOPS MiB/s Average min max 00:13:33.773 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34810.40 135.98 3679.71 1010.79 10694.41 00:13:33.773 ======================================================== 00:13:33.773 Total : 34810.40 135.98 3679.71 1010.79 10694.41 00:13:33.773 00:13:33.773 03:57:08 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:40.343 Initializing NVMe Controllers 00:13:40.343 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:40.343 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:40.343 Initialization complete. Launching workers. 00:13:40.343 ======================================================== 00:13:40.343 Latency(us) 00:13:40.343 Device Information : IOPS MiB/s Average min max 00:13:40.343 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15980.30 62.42 8009.12 5074.20 15999.95 00:13:40.343 ======================================================== 00:13:40.343 Total : 15980.30 62.42 8009.12 5074.20 15999.95 00:13:40.343 00:13:40.343 03:57:14 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:44.542 Initializing NVMe Controllers 00:13:44.542 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.542 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:44.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:44.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:44.542 Initialization complete. Launching workers. 
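The read and write spdk_nvme_perf tables above are internally consistent: MiB/s is IOPS times the 4 KiB block size, and because the queue is kept full (-q 128) Little's law predicts an average latency of roughly queue_depth / IOPS. A quick cross-check (illustrative sketch, not part of the test scripts):

    # Cross-check spdk_nvme_perf output: throughput and average
    # latency both follow from IOPS at a fixed queue depth.
    def check(iops: float, qd: int = 128, bs: int = 4096) -> None:
        mib_s = iops * bs / (1 << 20)
        avg_lat_us = qd / iops * 1e6
        print(f"{mib_s:.2f} MiB/s, ~{avg_lat_us:.0f} us average latency")

    check(34810.40)  # read run:  135.98 MiB/s, ~3677 us (table: 3679.71)
    check(15980.30)  # write run:  62.42 MiB/s, ~8010 us (table: 8009.12)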
00:13:44.542 Starting thread on core 2 00:13:44.542 Starting thread on core 3 00:13:44.543 Starting thread on core 1 00:13:44.543 03:57:19 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:47.860 Initializing NVMe Controllers 00:13:47.860 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.860 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:47.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:47.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:47.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:47.860 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:47.860 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:47.860 Initialization complete. Launching workers. 00:13:47.860 Starting thread on core 1 with urgent priority queue 00:13:47.860 Starting thread on core 2 with urgent priority queue 00:13:47.860 Starting thread on core 3 with urgent priority queue 00:13:47.860 Starting thread on core 0 with urgent priority queue 00:13:47.860 SPDK bdev Controller (SPDK1 ) core 0: 5429.67 IO/s 18.42 secs/100000 ios 00:13:47.860 SPDK bdev Controller (SPDK1 ) core 1: 4738.67 IO/s 21.10 secs/100000 ios 00:13:47.860 SPDK bdev Controller (SPDK1 ) core 2: 5048.33 IO/s 19.81 secs/100000 ios 00:13:47.860 SPDK bdev Controller (SPDK1 ) core 3: 4822.33 IO/s 20.74 secs/100000 ios 00:13:47.860 ======================================================== 00:13:47.860 00:13:47.860 03:57:22 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:48.427 Initializing NVMe Controllers 00:13:48.427 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.427 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.427 Namespace ID: 1 size: 0GB 00:13:48.427 Initialization complete. 00:13:48.427 INFO: using host memory buffer for IO 00:13:48.427 Hello world! 00:13:48.427 03:57:23 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:49.803 Initializing NVMe Controllers 00:13:49.803 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:49.803 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:49.803 Initialization complete. Launching workers. 
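In the arbitration table above, the "secs/100000 ios" column is simply the inverse of the per-core rate, 100000 / (IO/s), so the two columns can be checked against each other (illustrative sketch using the values printed above):

    # The arbitration example prints per-core IOPS plus the time a
    # core would need for 100000 IOs; the two columns are reciprocal.
    for core, iops in [(0, 5429.67), (1, 4738.67), (2, 5048.33), (3, 4822.33)]:
        print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
    # -> 18.42, 21.10, 19.81, 20.74, matching the printed columns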
00:13:49.803 submit (in ns) avg, min, max = 7634.3, 3758.2, 4045737.3 00:13:49.803 complete (in ns) avg, min, max = 35536.3, 2130.0, 7014123.6 00:13:49.803 00:13:49.803 Submit histogram 00:13:49.803 ================ 00:13:49.803 Range in us Cumulative Count 00:13:49.803 3.753 - 3.782: 0.3660% ( 37) 00:13:49.803 3.782 - 3.811: 1.6916% ( 134) 00:13:49.803 3.811 - 3.840: 6.7465% ( 511) 00:13:49.803 3.840 - 3.869: 16.7771% ( 1014) 00:13:49.803 3.869 - 3.898: 26.1252% ( 945) 00:13:49.803 3.898 - 3.927: 37.4122% ( 1141) 00:13:49.803 3.927 - 3.956: 48.7981% ( 1151) 00:13:49.803 3.956 - 3.985: 60.4016% ( 1173) 00:13:49.803 3.985 - 4.015: 67.8801% ( 756) 00:13:49.803 4.015 - 4.044: 74.4386% ( 663) 00:13:49.803 4.044 - 4.073: 79.0978% ( 471) 00:13:49.803 4.073 - 4.102: 82.2930% ( 323) 00:13:49.803 4.102 - 4.131: 84.5187% ( 225) 00:13:49.803 4.131 - 4.160: 85.8146% ( 131) 00:13:49.803 4.160 - 4.189: 86.9028% ( 110) 00:13:49.803 4.189 - 4.218: 87.9810% ( 109) 00:13:49.803 4.218 - 4.247: 89.3659% ( 140) 00:13:49.803 4.247 - 4.276: 90.7014% ( 135) 00:13:49.803 4.276 - 4.305: 92.3237% ( 164) 00:13:49.803 4.305 - 4.335: 93.9856% ( 168) 00:13:49.803 4.335 - 4.364: 95.4496% ( 148) 00:13:49.803 4.364 - 4.393: 96.2904% ( 85) 00:13:49.803 4.393 - 4.422: 96.9829% ( 70) 00:13:49.803 4.422 - 4.451: 97.3786% ( 40) 00:13:49.803 4.451 - 4.480: 97.7050% ( 33) 00:13:49.803 4.480 - 4.509: 97.9424% ( 24) 00:13:49.803 4.509 - 4.538: 98.0908% ( 15) 00:13:49.803 4.538 - 4.567: 98.1205% ( 3) 00:13:49.803 4.567 - 4.596: 98.1403% ( 2) 00:13:49.803 4.596 - 4.625: 98.1798% ( 4) 00:13:49.803 4.625 - 4.655: 98.1996% ( 2) 00:13:49.803 4.655 - 4.684: 98.2095% ( 1) 00:13:49.803 4.684 - 4.713: 98.2194% ( 1) 00:13:49.803 4.713 - 4.742: 98.2491% ( 3) 00:13:49.803 4.800 - 4.829: 98.2788% ( 3) 00:13:49.803 4.858 - 4.887: 98.3084% ( 3) 00:13:49.803 4.887 - 4.916: 98.3480% ( 4) 00:13:49.803 4.916 - 4.945: 98.4173% ( 7) 00:13:49.803 4.945 - 4.975: 98.4667% ( 5) 00:13:49.803 4.975 - 5.004: 98.5656% ( 10) 00:13:49.803 5.004 - 5.033: 98.6547% ( 9) 00:13:49.803 5.033 - 5.062: 98.7041% ( 5) 00:13:49.803 5.062 - 5.091: 98.7833% ( 8) 00:13:49.803 5.091 - 5.120: 98.8228% ( 4) 00:13:49.803 5.120 - 5.149: 98.9020% ( 8) 00:13:49.803 5.149 - 5.178: 98.9811% ( 8) 00:13:49.803 5.178 - 5.207: 99.0009% ( 2) 00:13:49.803 5.207 - 5.236: 99.0306% ( 3) 00:13:49.803 5.236 - 5.265: 99.0602% ( 3) 00:13:49.803 5.265 - 5.295: 99.0701% ( 1) 00:13:49.803 5.295 - 5.324: 99.1196% ( 5) 00:13:49.803 5.324 - 5.353: 99.1493% ( 3) 00:13:49.803 5.353 - 5.382: 99.2086% ( 6) 00:13:49.803 5.382 - 5.411: 99.2383% ( 3) 00:13:49.803 5.411 - 5.440: 99.2482% ( 1) 00:13:49.803 5.469 - 5.498: 99.2581% ( 1) 00:13:49.803 5.527 - 5.556: 99.2779% ( 2) 00:13:49.803 5.585 - 5.615: 99.2878% ( 1) 00:13:49.803 5.702 - 5.731: 99.3075% ( 2) 00:13:49.803 5.731 - 5.760: 99.3174% ( 1) 00:13:49.803 5.905 - 5.935: 99.3273% ( 1) 00:13:49.803 5.964 - 5.993: 99.3372% ( 1) 00:13:49.803 6.080 - 6.109: 99.3471% ( 1) 00:13:49.803 6.167 - 6.196: 99.3570% ( 1) 00:13:49.803 6.371 - 6.400: 99.3669% ( 1) 00:13:49.803 9.425 - 9.484: 99.3768% ( 1) 00:13:49.803 9.658 - 9.716: 99.3867% ( 1) 00:13:49.803 9.775 - 9.833: 99.4065% ( 2) 00:13:49.803 9.833 - 9.891: 99.4164% ( 1) 00:13:49.803 10.182 - 10.240: 99.4263% ( 1) 00:13:49.803 10.298 - 10.356: 99.4361% ( 1) 00:13:49.803 10.356 - 10.415: 99.4559% ( 2) 00:13:49.803 10.415 - 10.473: 99.4658% ( 1) 00:13:49.803 10.473 - 10.531: 99.4757% ( 1) 00:13:49.803 10.764 - 10.822: 99.4856% ( 1) 00:13:49.803 10.880 - 10.938: 99.4955% ( 1) 00:13:49.803 10.938 - 10.996: 99.5054% ( 
1) 00:13:49.803 11.055 - 11.113: 99.5153% ( 1) 00:13:49.803 11.171 - 11.229: 99.5252% ( 1) 00:13:49.803 11.229 - 11.287: 99.5450% ( 2) 00:13:49.803 11.287 - 11.345: 99.5746% ( 3) 00:13:49.803 11.462 - 11.520: 99.5845% ( 1) 00:13:49.803 11.520 - 11.578: 99.5944% ( 1) 00:13:49.803 11.578 - 11.636: 99.6043% ( 1) 00:13:49.803 11.636 - 11.695: 99.6241% ( 2) 00:13:49.803 11.695 - 11.753: 99.6340% ( 1) 00:13:49.803 11.753 - 11.811: 99.6439% ( 1) 00:13:49.803 11.927 - 11.985: 99.6538% ( 1) 00:13:49.803 13.091 - 13.149: 99.6637% ( 1) 00:13:49.803 13.265 - 13.324: 99.6736% ( 1) 00:13:49.803 13.498 - 13.556: 99.6835% ( 1) 00:13:49.803 13.556 - 13.615: 99.6933% ( 1) 00:13:49.803 14.487 - 14.545: 99.7032% ( 1) 00:13:49.803 15.011 - 15.127: 99.7131% ( 1) 00:13:49.803 15.825 - 15.942: 99.7230% ( 1) 00:13:49.803 15.942 - 16.058: 99.7527% ( 3) 00:13:49.803 16.407 - 16.524: 99.7626% ( 1) 00:13:49.803 16.989 - 17.105: 99.7725% ( 1) 00:13:49.803 17.338 - 17.455: 99.7824% ( 1) 00:13:49.803 17.455 - 17.571: 99.7923% ( 1) 00:13:49.803 18.269 - 18.385: 99.8022% ( 1) 00:13:49.803 18.502 - 18.618: 99.8417% ( 4) 00:13:49.803 18.618 - 18.735: 99.8516% ( 1) 00:13:49.803 20.015 - 20.131: 99.8615% ( 1) 00:13:49.803 20.131 - 20.247: 99.8813% ( 2) 00:13:49.803 20.247 - 20.364: 99.8912% ( 1) 00:13:49.803 20.364 - 20.480: 99.9011% ( 1) 00:13:49.803 23.040 - 23.156: 99.9110% ( 1) 00:13:49.803 3961.949 - 3991.738: 99.9209% ( 1) 00:13:49.803 3991.738 - 4021.527: 99.9703% ( 5) 00:13:49.803 4021.527 - 4051.316: 100.0000% ( 3) 00:13:49.803 00:13:49.803 Complete histogram 00:13:49.803 ================== 00:13:49.803 Range in us Cumulative Count 00:13:49.803 2.124 - 2.138: 0.0396% ( 4) 00:13:49.803 2.138 - 2.153: 6.5189% ( 655) 00:13:49.803 2.153 - 2.167: 14.8185% ( 839) 00:13:49.803 2.167 - 2.182: 41.2207% ( 2669) 00:13:49.803 2.182 - 2.196: 72.3712% ( 3149) 00:13:49.803 2.196 - 2.211: 82.5601% ( 1030) 00:13:49.803 2.211 - 2.225: 84.9342% ( 240) 00:13:49.803 2.225 - 2.240: 86.3488% ( 143) 00:13:49.803 2.240 - 2.255: 89.3956% ( 308) 00:13:49.803 2.255 - 2.269: 91.8884% ( 252) 00:13:49.803 2.269 - 2.284: 92.9073% ( 103) 00:13:49.803 2.284 - 2.298: 93.5107% ( 61) 00:13:49.803 2.298 - 2.313: 94.0548% ( 55) 00:13:49.803 2.313 - 2.327: 94.6879% ( 64) 00:13:49.803 2.327 - 2.342: 95.2616% ( 58) 00:13:49.803 2.342 - 2.356: 95.4991% ( 24) 00:13:49.803 2.356 - 2.371: 95.6771% ( 18) 00:13:49.803 2.371 - 2.385: 95.9145% ( 24) 00:13:49.803 2.385 - 2.400: 96.0926% ( 18) 00:13:49.803 2.400 - 2.415: 96.3201% ( 23) 00:13:49.803 2.415 - 2.429: 96.5476% ( 23) 00:13:49.803 2.429 - 2.444: 96.7949% ( 25) 00:13:49.803 2.444 - 2.458: 96.9928% ( 20) 00:13:49.803 2.458 - 2.473: 97.1708% ( 18) 00:13:49.803 2.473 - 2.487: 97.2994% ( 13) 00:13:49.803 2.487 - 2.502: 97.5665% ( 27) 00:13:49.803 2.502 - 2.516: 97.6852% ( 12) 00:13:49.803 2.516 - 2.531: 97.9622% ( 28) 00:13:49.803 2.531 - 2.545: 98.1798% ( 22) 00:13:49.803 2.545 - 2.560: 98.2590% ( 8) 00:13:49.803 2.560 - 2.575: 98.2887% ( 3) 00:13:49.803 2.575 - 2.589: 98.3183% ( 3) 00:13:49.803 2.589 - 2.604: 98.3381% ( 2) 00:13:49.803 2.604 - 2.618: 98.3579% ( 2) 00:13:49.803 2.618 - 2.633: 98.3678% ( 1) 00:13:49.803 2.647 - 2.662: 98.3777% ( 1) 00:13:49.803 2.691 - 2.705: 98.3876% ( 1) 00:13:49.803 2.880 - 2.895: 98.3975% ( 1) 00:13:49.803 2.924 - 2.938: 98.4074% ( 1) 00:13:49.803 3.724 - 3.753: 98.4173% ( 1) 00:13:49.803 3.753 - 3.782: 98.4370% ( 2) 00:13:49.804 3.782 - 3.811: 98.4469% ( 1) 00:13:49.804 3.869 - 3.898: 98.4667% ( 2) 00:13:49.804 3.898 - 3.927: 98.4865% ( 2) 00:13:49.804 3.956 - 3.985: 98.5063% 
( 2) 00:13:49.804 3.985 - 4.015: 98.5162% ( 1) 00:13:49.804 4.015 - 4.044: 98.5656% ( 5) 00:13:49.804 4.102 - 4.131: 98.5755% ( 1) 00:13:49.804 4.131 - 4.160: 98.5953% ( 2) 00:13:49.804 4.160 - 4.189: 98.6052% ( 1) 00:13:49.804 4.189 - 4.218: 98.6151% ( 1) 00:13:49.804 4.247 - 4.276: 98.6250% ( 1) 00:13:49.804 4.276 - 4.305: 98.6349% ( 1) 00:13:49.804 4.335 - 4.364: 98.6448% ( 1) 00:13:49.804 4.596 - 4.625: 98.6547% ( 1) 00:13:49.804 4.800 - 4.829: 98.6646% ( 1) 00:13:49.804 4.829 - 4.858: 98.6744% ( 1) 00:13:49.804 4.858 - 4.887: 98.6843% ( 1) 00:13:49.804 5.033 - 5.062: 98.6942% ( 1) 00:13:49.804 7.622 - 7.680: 98.7041% ( 1) 00:13:49.804 7.913 - 7.971: 98.7140% ( 1) 00:13:49.804 8.087 - 8.145: 98.7239% ( 1) 00:13:49.804 8.145 - 8.204: 98.7338% ( 1) 00:13:49.804 8.495 - 8.553: 98.7437% ( 1) 00:13:49.804 8.553 - 8.611: 98.7536% ( 1) 00:13:49.804 8.611 - 8.669: 98.7635% ( 1) 00:13:49.804 8.727 - 8.785: 98.7734% ( 1) 00:13:49.804 8.785 - 8.844: 98.7833% ( 1) 00:13:49.804 8.844 - 8.902: 98.7932% ( 1) 00:13:49.804 8.902 - 8.960: 98.8030% ( 1) 00:13:49.804 9.076 - 9.135: 98.8228% ( 2) 00:13:49.804 9.542 - 9.600: 98.8327% ( 1) 00:13:49.804 9.891 - 9.949: 98.8426% ( 1) 00:13:49.804 10.065 - 10.124: 98.8525% ( 1) 00:13:49.804 10.182 - 10.240: 98.8723% ( 2) 00:13:49.804 10.415 - 10.473: 98.8822% ( 1) 00:13:49.804 10.473 - 10.531: 98.8921% ( 1) 00:13:49.804 10.589 - 10.647: 98.9020% ( 1) 00:13:49.804 10.647 - 10.705: 98.9119% ( 1) 00:13:49.804 11.811 - 11.869: 98.9218% ( 1) 00:13:49.804 12.451 - 12.509: 98.9316% ( 1) 00:13:49.804 12.800 - 12.858: 98.9415% ( 1) 00:13:49.804 12.975 - 13.033: 98.9514% ( 1) 00:13:49.804 13.207 - 13.265: 98.9613% ( 1) 00:13:49.804 13.324 - 13.382: 98.9712% ( 1) 00:13:49.804 13.382 - 13.440: 98.9811% ( 1) 00:13:49.804 13.440 - 13.498: 99.0009% ( 2) 00:13:49.804 13.556 - 13.615: 99.0207% ( 2) 00:13:49.804 13.673 - 13.731: 99.0306% ( 1) 00:13:49.804 13.789 - 13.847: 99.0405% ( 1) 00:13:49.804 13.905 - 13.964: 99.0701% ( 3) 00:13:49.804 14.313 - 14.371: 99.0800% ( 1) 00:13:49.804 14.429 - 14.487: 99.0899% ( 1) 00:13:49.804 14.778 - 14.836: 99.0998% ( 1) 00:13:49.804 14.836 - 14.895: 99.1097% ( 1) 00:13:49.804 15.127 - 15.244: 99.1196% ( 1) 00:13:49.804 15.360 - 15.476: 99.1295% ( 1) 00:13:49.804 15.593 - 15.709: 99.1394% ( 1) 00:13:49.804 15.709 - 15.825: 99.1493% ( 1) 00:13:49.804 17.222 - 17.338: 99.1592% ( 1) 00:13:49.804 20.015 - 20.131: 99.1691% ( 1) 00:13:49.804 40.495 - 40.727: 99.1789% ( 1) 00:13:49.804 3038.487 - 3053.382: 99.1987% ( 2) 00:13:49.804 3053.382 - 3068.276: 99.2086% ( 1) 00:13:49.804 3902.371 - 3932.160: 99.2185% ( 1) 00:13:49.804 3932.160 - 3961.949: 99.2284% ( 1) 00:13:49.804 3961.949 - 3991.738: 99.3372% ( 11) 00:13:49.804 3991.738 - 4021.527: 99.8318% ( 50) 00:13:49.804 4021.527 - 4051.316: 99.9604% ( 13) 00:13:49.804 4051.316 - 4081.105: 99.9703% ( 1) 00:13:49.804 4587.520 - 4617.309: 99.9802% ( 1) 00:13:49.804 7000.436 - 7030.225: 100.0000% ( 2) 00:13:49.804 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:49.804 [2024-11-08 03:57:24.886253] 
nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:49.804 [ 00:13:49.804 { 00:13:49.804 "allow_any_host": true, 00:13:49.804 "hosts": [], 00:13:49.804 "listen_addresses": [], 00:13:49.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:49.804 "subtype": "Discovery" 00:13:49.804 }, 00:13:49.804 { 00:13:49.804 "allow_any_host": true, 00:13:49.804 "hosts": [], 00:13:49.804 "listen_addresses": [ 00:13:49.804 { 00:13:49.804 "adrfam": "IPv4", 00:13:49.804 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:49.804 "transport": "VFIOUSER", 00:13:49.804 "trsvcid": "0", 00:13:49.804 "trtype": "VFIOUSER" 00:13:49.804 } 00:13:49.804 ], 00:13:49.804 "max_cntlid": 65519, 00:13:49.804 "max_namespaces": 32, 00:13:49.804 "min_cntlid": 1, 00:13:49.804 "model_number": "SPDK bdev Controller", 00:13:49.804 "namespaces": [ 00:13:49.804 { 00:13:49.804 "bdev_name": "Malloc1", 00:13:49.804 "name": "Malloc1", 00:13:49.804 "nguid": "A944AA8C2A6B4291BE2F8CCD4840F1C1", 00:13:49.804 "nsid": 1, 00:13:49.804 "uuid": "a944aa8c-2a6b-4291-be2f-8ccd4840f1c1" 00:13:49.804 } 00:13:49.804 ], 00:13:49.804 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:49.804 "serial_number": "SPDK1", 00:13:49.804 "subtype": "NVMe" 00:13:49.804 }, 00:13:49.804 { 00:13:49.804 "allow_any_host": true, 00:13:49.804 "hosts": [], 00:13:49.804 "listen_addresses": [ 00:13:49.804 { 00:13:49.804 "adrfam": "IPv4", 00:13:49.804 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:49.804 "transport": "VFIOUSER", 00:13:49.804 "trsvcid": "0", 00:13:49.804 "trtype": "VFIOUSER" 00:13:49.804 } 00:13:49.804 ], 00:13:49.804 "max_cntlid": 65519, 00:13:49.804 "max_namespaces": 32, 00:13:49.804 "min_cntlid": 1, 00:13:49.804 "model_number": "SPDK bdev Controller", 00:13:49.804 "namespaces": [ 00:13:49.804 { 00:13:49.804 "bdev_name": "Malloc2", 00:13:49.804 "name": "Malloc2", 00:13:49.804 "nguid": "9872D21EF98B4AC8B0C07A8DB1ACC6E0", 00:13:49.804 "nsid": 1, 00:13:49.804 "uuid": "9872d21e-f98b-4ac8-b0c0-7a8db1acc6e0" 00:13:49.804 } 00:13:49.804 ], 00:13:49.804 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:49.804 "serial_number": "SPDK2", 00:13:49.804 "subtype": "NVMe" 00:13:49.804 } 00:13:49.804 ] 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71194 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:49.804 03:57:24 -- common/autotest_common.sh@1254 -- # local i=0 00:13:49.804 03:57:24 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:49.804 03:57:24 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:13:49.804 03:57:24 -- common/autotest_common.sh@1257 -- # i=1 00:13:49.804 03:57:24 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:49.804 03:57:24 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:50.063 03:57:25 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.063 03:57:25 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:13:50.063 03:57:25 -- common/autotest_common.sh@1257 -- # i=2 00:13:50.063 03:57:25 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:50.063 03:57:25 -- common/autotest_common.sh@1255 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:50.063 03:57:25 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:13:50.063 03:57:25 -- common/autotest_common.sh@1257 -- # i=3 00:13:50.063 03:57:25 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:50.322 03:57:25 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.322 03:57:25 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.322 03:57:25 -- common/autotest_common.sh@1265 -- # return 0 00:13:50.322 03:57:25 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:50.322 03:57:25 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:50.581 Malloc3 00:13:50.581 03:57:25 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:50.840 03:57:25 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:50.840 Asynchronous Event Request test 00:13:50.840 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.840 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.840 Registering asynchronous event callbacks... 00:13:50.840 Starting namespace attribute notice tests for all controllers... 00:13:50.840 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:50.840 aer_cb - Changed Namespace 00:13:50.840 Cleaning up... 00:13:51.099 [ 00:13:51.099 { 00:13:51.099 "allow_any_host": true, 00:13:51.099 "hosts": [], 00:13:51.099 "listen_addresses": [], 00:13:51.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:51.099 "subtype": "Discovery" 00:13:51.099 }, 00:13:51.099 { 00:13:51.099 "allow_any_host": true, 00:13:51.099 "hosts": [], 00:13:51.099 "listen_addresses": [ 00:13:51.099 { 00:13:51.099 "adrfam": "IPv4", 00:13:51.099 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:51.099 "transport": "VFIOUSER", 00:13:51.099 "trsvcid": "0", 00:13:51.099 "trtype": "VFIOUSER" 00:13:51.099 } 00:13:51.099 ], 00:13:51.099 "max_cntlid": 65519, 00:13:51.099 "max_namespaces": 32, 00:13:51.099 "min_cntlid": 1, 00:13:51.099 "model_number": "SPDK bdev Controller", 00:13:51.099 "namespaces": [ 00:13:51.099 { 00:13:51.099 "bdev_name": "Malloc1", 00:13:51.099 "name": "Malloc1", 00:13:51.099 "nguid": "A944AA8C2A6B4291BE2F8CCD4840F1C1", 00:13:51.099 "nsid": 1, 00:13:51.099 "uuid": "a944aa8c-2a6b-4291-be2f-8ccd4840f1c1" 00:13:51.099 }, 00:13:51.099 { 00:13:51.099 "bdev_name": "Malloc3", 00:13:51.099 "name": "Malloc3", 00:13:51.099 "nguid": "E744DEAC3AF44D3AB9A4679BC19C221A", 00:13:51.099 "nsid": 2, 00:13:51.099 "uuid": "e744deac-3af4-4d3a-b9a4-679bc19c221a" 00:13:51.099 } 00:13:51.099 ], 00:13:51.099 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:51.099 "serial_number": "SPDK1", 00:13:51.099 "subtype": "NVMe" 00:13:51.099 }, 00:13:51.099 { 00:13:51.099 "allow_any_host": true, 00:13:51.099 "hosts": [], 00:13:51.099 "listen_addresses": [ 00:13:51.099 { 00:13:51.099 "adrfam": "IPv4", 00:13:51.099 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:51.099 "transport": "VFIOUSER", 00:13:51.099 "trsvcid": "0", 00:13:51.099 "trtype": "VFIOUSER" 00:13:51.099 } 00:13:51.099 ], 00:13:51.099 "max_cntlid": 65519, 00:13:51.099 "max_namespaces": 32, 00:13:51.099 "min_cntlid": 1, 00:13:51.099 "model_number": "SPDK bdev Controller", 00:13:51.099 "namespaces": [ 00:13:51.099 { 00:13:51.099 "bdev_name": "Malloc2", 00:13:51.099 
"name": "Malloc2", 00:13:51.099 "nguid": "9872D21EF98B4AC8B0C07A8DB1ACC6E0", 00:13:51.099 "nsid": 1, 00:13:51.099 "uuid": "9872d21e-f98b-4ac8-b0c0-7a8db1acc6e0" 00:13:51.099 } 00:13:51.099 ], 00:13:51.099 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:51.099 "serial_number": "SPDK2", 00:13:51.099 "subtype": "NVMe" 00:13:51.099 } 00:13:51.099 ] 00:13:51.099 03:57:26 -- target/nvmf_vfio_user.sh@44 -- # wait 71194 00:13:51.100 03:57:26 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:51.100 03:57:26 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:51.100 03:57:26 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:51.100 03:57:26 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:51.100 [2024-11-08 03:57:26.110763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:51.100 [2024-11-08 03:57:26.110812] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:13:51.360 [2024-11-08 03:57:26.244619] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:51.360 [2024-11-08 03:57:26.257818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:51.360 [2024-11-08 03:57:26.257864] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe5ef4ed000 00:13:51.360 [2024-11-08 03:57:26.258802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.259803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.260807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.261814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.262824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.263830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.264831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.265846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.360 [2024-11-08 03:57:26.266851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:51.360 [2024-11-08 03:57:26.266876] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 
0x7fe5eebda000 00:13:51.360 [2024-11-08 03:57:26.268033] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:51.360 [2024-11-08 03:57:26.283043] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:51.360 [2024-11-08 03:57:26.283106] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:51.360 [2024-11-08 03:57:26.285225] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:51.361 [2024-11-08 03:57:26.285313] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:51.361 [2024-11-08 03:57:26.285439] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:51.361 [2024-11-08 03:57:26.285495] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:51.361 [2024-11-08 03:57:26.285503] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:51.361 [2024-11-08 03:57:26.286210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:51.361 [2024-11-08 03:57:26.286234] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:51.361 [2024-11-08 03:57:26.286255] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:51.361 [2024-11-08 03:57:26.287215] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:51.361 [2024-11-08 03:57:26.287241] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:51.361 [2024-11-08 03:57:26.287257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.288210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:51.361 [2024-11-08 03:57:26.288234] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.289216] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:51.361 [2024-11-08 03:57:26.289239] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:51.361 [2024-11-08 03:57:26.289247] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.289256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.289362] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:51.361 [2024-11-08 03:57:26.289369] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.289374] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:51.361 [2024-11-08 03:57:26.290225] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:51.361 [2024-11-08 03:57:26.291219] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:51.361 [2024-11-08 03:57:26.292224] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:51.361 [2024-11-08 03:57:26.293263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:51.361 [2024-11-08 03:57:26.294229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:51.361 [2024-11-08 03:57:26.294253] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:51.361 [2024-11-08 03:57:26.294261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.294283] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:51.361 [2024-11-08 03:57:26.294301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.294319] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.361 [2024-11-08 03:57:26.294325] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.361 [2024-11-08 03:57:26.294343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.302437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.302463] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:51.361 [2024-11-08 03:57:26.302470] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:51.361 [2024-11-08 03:57:26.302476] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:51.361 [2024-11-08 03:57:26.302485] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:51.361 [2024-11-08 
03:57:26.302490] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:51.361 [2024-11-08 03:57:26.302495] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:51.361 [2024-11-08 03:57:26.302500] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.302517] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.302531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.310463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.361 [2024-11-08 03:57:26.310477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.361 [2024-11-08 03:57:26.310486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.361 [2024-11-08 03:57:26.310496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.361 [2024-11-08 03:57:26.310502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.310515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.310527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.318432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.318461] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:51.361 [2024-11-08 03:57:26.318468] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.318478] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.318490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.318502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.326431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e 
sqhd:0008 p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.326523] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.326538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.326550] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:51.361 [2024-11-08 03:57:26.326557] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:51.361 [2024-11-08 03:57:26.326566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.334460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.334506] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:51.361 [2024-11-08 03:57:26.334520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.334532] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.334541] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.361 [2024-11-08 03:57:26.334547] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.361 [2024-11-08 03:57:26.334554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.342429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.342463] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.342484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.342495] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.361 [2024-11-08 03:57:26.342500] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.361 [2024-11-08 03:57:26.342508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.361 [2024-11-08 03:57:26.349458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:51.361 [2024-11-08 03:57:26.349489] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.349506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.349530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:51.361 [2024-11-08 03:57:26.349537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:51.362 [2024-11-08 03:57:26.349543] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:51.362 [2024-11-08 03:57:26.349549] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:51.362 [2024-11-08 03:57:26.349554] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:51.362 [2024-11-08 03:57:26.349560] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:51.362 [2024-11-08 03:57:26.349587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.357434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.357462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.365460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.373430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.373458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.381427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.381480] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:51.362 [2024-11-08 03:57:26.381488] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:51.362 [2024-11-08 03:57:26.381493] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:51.362 [2024-11-08 03:57:26.381497] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:51.362 [2024-11-08 03:57:26.381505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:51.362 [2024-11-08 03:57:26.381515] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:51.362 [2024-11-08 03:57:26.381520] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fc000 00:13:51.362 [2024-11-08 03:57:26.381527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.381535] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:51.362 [2024-11-08 03:57:26.381540] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.362 [2024-11-08 03:57:26.381546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.381556] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:51.362 [2024-11-08 03:57:26.381561] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:51.362 [2024-11-08 03:57:26.381567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:51.362 [2024-11-08 03:57:26.389434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.389510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.389524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:51.362 [2024-11-08 03:57:26.389533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:51.362 ===================================================== 00:13:51.362 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:51.362 ===================================================== 00:13:51.362 Controller Capabilities/Features 00:13:51.362 ================================ 00:13:51.362 Vendor ID: 4e58 00:13:51.362 Subsystem Vendor ID: 4e58 00:13:51.362 Serial Number: SPDK2 00:13:51.362 Model Number: SPDK bdev Controller 00:13:51.362 Firmware Version: 24.01.1 00:13:51.362 Recommended Arb Burst: 6 00:13:51.362 IEEE OUI Identifier: 8d 6b 50 00:13:51.362 Multi-path I/O 00:13:51.362 May have multiple subsystem ports: Yes 00:13:51.362 May have multiple controllers: Yes 00:13:51.362 Associated with SR-IOV VF: No 00:13:51.362 Max Data Transfer Size: 131072 00:13:51.362 Max Number of Namespaces: 32 00:13:51.362 Max Number of I/O Queues: 127 00:13:51.362 NVMe Specification Version (VS): 1.3 00:13:51.362 NVMe Specification Version (Identify): 1.3 00:13:51.362 Maximum Queue Entries: 256 00:13:51.362 Contiguous Queues Required: Yes 00:13:51.362 Arbitration Mechanisms Supported 00:13:51.362 Weighted Round Robin: Not Supported 00:13:51.362 Vendor Specific: Not Supported 00:13:51.362 Reset Timeout: 15000 ms 00:13:51.362 Doorbell Stride: 4 bytes 00:13:51.362 NVM Subsystem Reset: Not Supported 00:13:51.362 Command Sets Supported 00:13:51.362 NVM Command Set: Supported 00:13:51.362 Boot Partition: Not Supported 00:13:51.362 Memory Page Size Minimum: 4096 bytes 00:13:51.362 Memory Page Size Maximum: 4096 bytes 00:13:51.362 Persistent Memory Region: Not Supported 00:13:51.362 Optional Asynchronous Events Supported 00:13:51.362 Namespace Attribute Notices: Supported 
00:13:51.362 Firmware Activation Notices: Not Supported 00:13:51.362 ANA Change Notices: Not Supported 00:13:51.362 PLE Aggregate Log Change Notices: Not Supported 00:13:51.362 LBA Status Info Alert Notices: Not Supported 00:13:51.362 EGE Aggregate Log Change Notices: Not Supported 00:13:51.362 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.362 Zone Descriptor Change Notices: Not Supported 00:13:51.362 Discovery Log Change Notices: Not Supported 00:13:51.362 Controller Attributes 00:13:51.362 128-bit Host Identifier: Supported 00:13:51.362 Non-Operational Permissive Mode: Not Supported 00:13:51.362 NVM Sets: Not Supported 00:13:51.362 Read Recovery Levels: Not Supported 00:13:51.362 Endurance Groups: Not Supported 00:13:51.362 Predictable Latency Mode: Not Supported 00:13:51.362 Traffic Based Keep Alive: Not Supported 00:13:51.362 Namespace Granularity: Not Supported 00:13:51.362 SQ Associations: Not Supported 00:13:51.362 UUID List: Not Supported 00:13:51.362 Multi-Domain Subsystem: Not Supported 00:13:51.362 Fixed Capacity Management: Not Supported 00:13:51.362 Variable Capacity Management: Not Supported 00:13:51.362 Delete Endurance Group: Not Supported 00:13:51.362 Delete NVM Set: Not Supported 00:13:51.362 Extended LBA Formats Supported: Not Supported 00:13:51.362 Flexible Data Placement Supported: Not Supported 00:13:51.362 00:13:51.362 Controller Memory Buffer Support 00:13:51.362 ================================ 00:13:51.362 Supported: No 00:13:51.362 00:13:51.362 Persistent Memory Region Support 00:13:51.362 ================================ 00:13:51.362 Supported: No 00:13:51.362 00:13:51.362 Admin Command Set Attributes 00:13:51.362 ============================ 00:13:51.362 Security Send/Receive: Not Supported 00:13:51.362 Format NVM: Not Supported 00:13:51.362 Firmware Activate/Download: Not Supported 00:13:51.362 Namespace Management: Not Supported 00:13:51.362 Device Self-Test: Not Supported 00:13:51.362 Directives: Not Supported 00:13:51.362 NVMe-MI: Not Supported 00:13:51.362 Virtualization Management: Not Supported 00:13:51.362 Doorbell Buffer Config: Not Supported 00:13:51.362 Get LBA Status Capability: Not Supported 00:13:51.362 Command & Feature Lockdown Capability: Not Supported 00:13:51.362 Abort Command Limit: 4 00:13:51.362 Async Event Request Limit: 4 00:13:51.362 Number of Firmware Slots: N/A 00:13:51.362 Firmware Slot 1 Read-Only: N/A 00:13:51.362 Firmware Activation Without Reset: N/A 00:13:51.362 Multiple Update Detection Support: N/A 00:13:51.362 Firmware Update Granularity: No Information Provided 00:13:51.362 Per-Namespace SMART Log: No 00:13:51.362 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.362 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:51.362 Command Effects Log Page: Supported 00:13:51.362 Get Log Page Extended Data: Supported 00:13:51.362 Telemetry Log Pages: Not Supported 00:13:51.362 Persistent Event Log Pages: Not Supported 00:13:51.362 Supported Log Pages Log Page: May Support 00:13:51.362 Commands Supported & Effects Log Page: Not Supported 00:13:51.362 Feature Identifiers & Effects Log Page: May Support 00:13:51.362 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.362 Data Area 4 for Telemetry Log: Not Supported 00:13:51.362 Error Log Page Entries Supported: 128 00:13:51.362 Keep Alive: Supported 00:13:51.362 Keep Alive Granularity: 10000 ms 00:13:51.362 00:13:51.362 NVM Command Set Attributes 00:13:51.362 ========================== 00:13:51.362 Submission Queue Entry Size 00:13:51.362 Max: 64 00:13:51.362
Min: 64 00:13:51.362 Completion Queue Entry Size 00:13:51.362 Max: 16 00:13:51.362 Min: 16 00:13:51.362 Number of Namespaces: 32 00:13:51.362 Compare Command: Supported 00:13:51.362 Write Uncorrectable Command: Not Supported 00:13:51.362 Dataset Management Command: Supported 00:13:51.362 Write Zeroes Command: Supported 00:13:51.362 Set Features Save Field: Not Supported 00:13:51.362 Reservations: Not Supported 00:13:51.362 Timestamp: Not Supported 00:13:51.362 Copy: Supported 00:13:51.362 Volatile Write Cache: Present 00:13:51.362 Atomic Write Unit (Normal): 1 00:13:51.363 Atomic Write Unit (PFail): 1 00:13:51.363 Atomic Compare & Write Unit: 1 00:13:51.363 Fused Compare & Write: Supported 00:13:51.363 Scatter-Gather List 00:13:51.363 SGL Command Set: Supported (Dword aligned) 00:13:51.363 SGL Keyed: Not Supported 00:13:51.363 SGL Bit Bucket Descriptor: Not Supported 00:13:51.363 SGL Metadata Pointer: Not Supported 00:13:51.363 Oversized SGL: Not Supported 00:13:51.363 SGL Metadata Address: Not Supported 00:13:51.363 SGL Offset: Not Supported 00:13:51.363 Transport SGL Data Block: Not Supported 00:13:51.363 Replay Protected Memory Block: Not Supported 00:13:51.363 00:13:51.363 Firmware Slot Information 00:13:51.363 ========================= 00:13:51.363 Active slot: 1 00:13:51.363 Slot 1 Firmware Revision: 24.01.1 00:13:51.363 00:13:51.363 00:13:51.363 Commands Supported and Effects 00:13:51.363 ============================== 00:13:51.363 Admin Commands 00:13:51.363 -------------- 00:13:51.363 Get Log Page (02h): Supported 00:13:51.363 Identify (06h): Supported 00:13:51.363 Abort (08h): Supported 00:13:51.363 Set Features (09h): Supported 00:13:51.363 Get Features (0Ah): Supported 00:13:51.363 Asynchronous Event Request (0Ch): Supported 00:13:51.363 Keep Alive (18h): Supported 00:13:51.363 I/O Commands 00:13:51.363 ------------ 00:13:51.363 Flush (00h): Supported LBA-Change 00:13:51.363 Write (01h): Supported LBA-Change 00:13:51.363 Read (02h): Supported 00:13:51.363 Compare (05h): Supported 00:13:51.363 Write Zeroes (08h): Supported LBA-Change 00:13:51.363 Dataset Management (09h): Supported LBA-Change 00:13:51.363 Copy (19h): Supported LBA-Change 00:13:51.363 Unknown (79h): Supported LBA-Change 00:13:51.363 Unknown (7Ah): Supported 00:13:51.363 00:13:51.363 Error Log 00:13:51.363 ========= 00:13:51.363 00:13:51.363 Arbitration 00:13:51.363 =========== 00:13:51.363 Arbitration Burst: 1 00:13:51.363 00:13:51.363 Power Management 00:13:51.363 ================ 00:13:51.363 Number of Power States: 1 00:13:51.363 Current Power State: Power State #0 00:13:51.363 Power State #0: 00:13:51.363 Max Power: 0.00 W 00:13:51.363 Non-Operational State: Operational 00:13:51.363 Entry Latency: Not Reported 00:13:51.363 Exit Latency: Not Reported 00:13:51.363 Relative Read Throughput: 0 00:13:51.363 Relative Read Latency: 0 00:13:51.363 Relative Write Throughput: 0 00:13:51.363 Relative Write Latency: 0 00:13:51.363 Idle Power: Not Reported 00:13:51.363 Active Power: Not Reported 00:13:51.363 Non-Operational Permissive Mode: Not Supported 00:13:51.363 00:13:51.363 Health Information 00:13:51.363 ================== 00:13:51.363 Critical Warnings: 00:13:51.363 Available Spare Space: OK 00:13:51.363 Temperature: OK 00:13:51.363 Device Reliability: OK 00:13:51.363 Read Only: No 00:13:51.363 Volatile Memory Backup: OK 00:13:51.363 Current Temperature: 0 Kelvin (-273 Celsius) [2024-11-08 03:57:26.389688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2
0x0 00:13:51.363 [2024-11-08 03:57:26.397432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:51.363 [2024-11-08 03:57:26.397516] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:51.363 [2024-11-08 03:57:26.397535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.363 [2024-11-08 03:57:26.397543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.363 [2024-11-08 03:57:26.397551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.363 [2024-11-08 03:57:26.397559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.363 [2024-11-08 03:57:26.397661] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:51.363 [2024-11-08 03:57:26.397683] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:51.363 [2024-11-08 03:57:26.398714] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:51.363 [2024-11-08 03:57:26.398735] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:51.363 [2024-11-08 03:57:26.399658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:51.363 [2024-11-08 03:57:26.399687] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:51.363 [2024-11-08 03:57:26.399922] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:51.363 [2024-11-08 03:57:26.402458] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:51.363 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:51.363 Available Spare: 0% 00:13:51.363 Available Spare Threshold: 0% 00:13:51.363 Life Percentage Used: 0% 00:13:51.363 Data Units Read: 0 00:13:51.363 Data Units Written: 0 00:13:51.363 Host Read Commands: 0 00:13:51.363 Host Write Commands: 0 00:13:51.363 Controller Busy Time: 0 minutes 00:13:51.363 Power Cycles: 0 00:13:51.363 Power On Hours: 0 hours 00:13:51.363 Unsafe Shutdowns: 0 00:13:51.363 Unrecoverable Media Errors: 0 00:13:51.363 Lifetime Error Log Entries: 0 00:13:51.363 Warning Temperature Time: 0 minutes 00:13:51.363 Critical Temperature Time: 0 minutes 00:13:51.363 00:13:51.363 Number of Queues 00:13:51.363 ================ 00:13:51.363 Number of I/O Submission Queues: 127 00:13:51.363 Number of I/O Completion Queues: 127 00:13:51.363 00:13:51.363 Active Namespaces 00:13:51.363 ================= 00:13:51.363 Namespace ID:1 00:13:51.363 Error Recovery Timeout: Unlimited 00:13:51.363 Command Set Identifier: NVM (00h) 00:13:51.363 Deallocate: Supported 00:13:51.363 Deallocated/Unwritten Error: Not Supported 00:13:51.363 Deallocated Read Value: Unknown 00:13:51.363
Deallocate in Write Zeroes: Not Supported 00:13:51.363 Deallocated Guard Field: 0xFFFF 00:13:51.363 Flush: Supported 00:13:51.363 Reservation: Supported 00:13:51.363 Namespace Sharing Capabilities: Multiple Controllers 00:13:51.363 Size (in LBAs): 131072 (0GiB) 00:13:51.363 Capacity (in LBAs): 131072 (0GiB) 00:13:51.363 Utilization (in LBAs): 131072 (0GiB) 00:13:51.363 NGUID: 9872D21EF98B4AC8B0C07A8DB1ACC6E0 00:13:51.363 UUID: 9872d21e-f98b-4ac8-b0c0-7a8db1acc6e0 00:13:51.363 Thin Provisioning: Not Supported 00:13:51.363 Per-NS Atomic Units: Yes 00:13:51.363 Atomic Boundary Size (Normal): 0 00:13:51.363 Atomic Boundary Size (PFail): 0 00:13:51.363 Atomic Boundary Offset: 0 00:13:51.363 Maximum Single Source Range Length: 65535 00:13:51.363 Maximum Copy Length: 65535 00:13:51.363 Maximum Source Range Count: 1 00:13:51.363 NGUID/EUI64 Never Reused: No 00:13:51.363 Namespace Write Protected: No 00:13:51.363 Number of LBA Formats: 1 00:13:51.363 Current LBA Format: LBA Format #00 00:13:51.363 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.363 00:13:51.363 03:57:26 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:57.930 Initializing NVMe Controllers 00:13:57.930 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:57.930 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:57.930 Initialization complete. Launching workers. 00:13:57.930 ======================================================== 00:13:57.930 Latency(us) 00:13:57.930 Device Information : IOPS MiB/s Average min max 00:13:57.930 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36640.88 143.13 3492.57 1135.08 9828.71 00:13:57.930 ======================================================== 00:13:57.930 Total : 36640.88 143.13 3492.57 1135.08 9828.71 00:13:57.930 00:13:57.930 03:57:31 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:02.129 Initializing NVMe Controllers 00:14:02.129 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:02.129 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:02.129 Initialization complete. Launching workers. 
00:14:02.129 ======================================================== 00:14:02.129 Latency(us) 00:14:02.129 Device Information : IOPS MiB/s Average min max 00:14:02.129 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35818.59 139.92 3573.27 1146.75 10363.44 00:14:02.129 ======================================================== 00:14:02.129 Total : 35818.59 139.92 3573.27 1146.75 10363.44 00:14:02.129 00:14:02.129 03:57:37 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:08.692 Initializing NVMe Controllers 00:14:08.692 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:08.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:08.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:08.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:08.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:08.692 Initialization complete. Launching workers. 00:14:08.692 Starting thread on core 2 00:14:08.692 Starting thread on core 3 00:14:08.692 Starting thread on core 1 00:14:08.692 03:57:42 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:11.227 Initializing NVMe Controllers 00:14:11.227 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.227 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.227 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:11.227 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:11.227 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:11.227 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:11.227 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:11.227 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:11.227 Initialization complete. Launching workers. 00:14:11.227 Starting thread on core 1 with urgent priority queue 00:14:11.227 Starting thread on core 2 with urgent priority queue 00:14:11.227 Starting thread on core 3 with urgent priority queue 00:14:11.227 Starting thread on core 0 with urgent priority queue 00:14:11.227 SPDK bdev Controller (SPDK2 ) core 0: 3860.33 IO/s 25.90 secs/100000 ios 00:14:11.227 SPDK bdev Controller (SPDK2 ) core 1: 3292.67 IO/s 30.37 secs/100000 ios 00:14:11.227 SPDK bdev Controller (SPDK2 ) core 2: 3848.33 IO/s 25.99 secs/100000 ios 00:14:11.227 SPDK bdev Controller (SPDK2 ) core 3: 3152.00 IO/s 31.73 secs/100000 ios 00:14:11.227 ======================================================== 00:14:11.227 00:14:11.227 03:57:45 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:11.227 Initializing NVMe Controllers 00:14:11.227 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.227 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.227 Namespace ID: 1 size: 0GB 00:14:11.227 Initialization complete. 
00:14:11.227 INFO: using host memory buffer for IO 00:14:11.227 Hello world! 00:14:11.486 03:57:46 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:12.863 Initializing NVMe Controllers 00:14:12.863 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.863 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.863 Initialization complete. Launching workers. 00:14:12.863 submit (in ns) avg, min, max = 8010.8, 3836.4, 5036959.1 00:14:12.863 complete (in ns) avg, min, max = 26428.5, 2202.7, 5051477.7 00:14:12.863 00:14:12.863 Submit histogram 00:14:12.863 ================ 00:14:12.863 Range in us Cumulative Count 00:14:12.863 3.811 - 3.840: 0.0169% ( 2) 00:14:12.863 3.840 - 3.869: 0.0678% ( 6) 00:14:12.863 3.869 - 3.898: 0.1186% ( 6) 00:14:12.863 3.898 - 3.927: 0.1949% ( 9) 00:14:12.863 3.927 - 3.956: 0.9066% ( 84) 00:14:12.863 3.956 - 3.985: 5.4313% ( 534) 00:14:12.863 3.985 - 4.015: 15.7346% ( 1216) 00:14:12.863 4.015 - 4.044: 29.2578% ( 1596) 00:14:12.863 4.044 - 4.073: 40.8405% ( 1367) 00:14:12.863 4.073 - 4.102: 53.2283% ( 1462) 00:14:12.863 4.102 - 4.131: 63.6248% ( 1227) 00:14:12.863 4.131 - 4.160: 70.6745% ( 832) 00:14:12.863 4.160 - 4.189: 75.2839% ( 544) 00:14:12.863 4.189 - 4.218: 78.3088% ( 357) 00:14:12.863 4.218 - 4.247: 80.5880% ( 269) 00:14:12.863 4.247 - 4.276: 82.5284% ( 229) 00:14:12.863 4.276 - 4.305: 84.1806% ( 195) 00:14:12.863 4.305 - 4.335: 85.6041% ( 168) 00:14:12.863 4.335 - 4.364: 87.3157% ( 202) 00:14:12.863 4.364 - 4.393: 89.1459% ( 216) 00:14:12.863 4.393 - 4.422: 91.1710% ( 239) 00:14:12.863 4.422 - 4.451: 93.0859% ( 226) 00:14:12.863 4.451 - 4.480: 94.4925% ( 166) 00:14:12.863 4.480 - 4.509: 95.5431% ( 124) 00:14:12.863 4.509 - 4.538: 96.3057% ( 90) 00:14:12.863 4.538 - 4.567: 96.7633% ( 54) 00:14:12.863 4.567 - 4.596: 97.0344% ( 32) 00:14:12.863 4.596 - 4.625: 97.2547% ( 26) 00:14:12.863 4.625 - 4.655: 97.3649% ( 13) 00:14:12.863 4.655 - 4.684: 97.4835% ( 14) 00:14:12.863 4.684 - 4.713: 97.5513% ( 8) 00:14:12.863 4.713 - 4.742: 97.6445% ( 11) 00:14:12.863 4.742 - 4.771: 97.6784% ( 4) 00:14:12.863 4.800 - 4.829: 97.6953% ( 2) 00:14:12.863 4.858 - 4.887: 97.7123% ( 2) 00:14:12.863 4.887 - 4.916: 97.7292% ( 2) 00:14:12.863 4.945 - 4.975: 97.7461% ( 2) 00:14:12.863 5.004 - 5.033: 97.7546% ( 1) 00:14:12.863 5.033 - 5.062: 97.7631% ( 1) 00:14:12.863 5.062 - 5.091: 97.7970% ( 4) 00:14:12.863 5.091 - 5.120: 97.8055% ( 1) 00:14:12.863 5.120 - 5.149: 97.8309% ( 3) 00:14:12.863 5.149 - 5.178: 97.8393% ( 1) 00:14:12.863 5.178 - 5.207: 97.8563% ( 2) 00:14:12.863 5.207 - 5.236: 97.8732% ( 2) 00:14:12.863 5.265 - 5.295: 97.8987% ( 3) 00:14:12.863 5.295 - 5.324: 97.9410% ( 5) 00:14:12.863 5.324 - 5.353: 97.9664% ( 3) 00:14:12.863 5.353 - 5.382: 97.9834% ( 2) 00:14:12.863 5.382 - 5.411: 98.0342% ( 6) 00:14:12.863 5.411 - 5.440: 98.0681% ( 4) 00:14:12.863 5.440 - 5.469: 98.1190% ( 6) 00:14:12.863 5.469 - 5.498: 98.1783% ( 7) 00:14:12.863 5.498 - 5.527: 98.2206% ( 5) 00:14:12.863 5.527 - 5.556: 98.2461% ( 3) 00:14:12.863 5.556 - 5.585: 98.3223% ( 9) 00:14:12.863 5.585 - 5.615: 98.3732% ( 6) 00:14:12.863 5.615 - 5.644: 98.4155% ( 5) 00:14:12.863 5.644 - 5.673: 98.4325% ( 2) 00:14:12.863 5.673 - 5.702: 98.4748% ( 5) 00:14:12.863 5.702 - 5.731: 98.5257% ( 6) 00:14:12.863 5.731 - 5.760: 98.5680% ( 5) 00:14:12.863 5.760 - 5.789: 98.5935% ( 3) 00:14:12.863 5.789 - 5.818: 98.6274% ( 4) 00:14:12.863 
5.847 - 5.876: 98.6358% ( 1) 00:14:12.863 5.876 - 5.905: 98.6528% ( 2) 00:14:12.863 5.905 - 5.935: 98.6782% ( 3) 00:14:12.863 5.935 - 5.964: 98.6867% ( 1) 00:14:12.863 5.964 - 5.993: 98.6951% ( 1) 00:14:12.863 6.051 - 6.080: 98.7036% ( 1) 00:14:12.863 6.080 - 6.109: 98.7121% ( 1) 00:14:12.863 6.109 - 6.138: 98.7375% ( 3) 00:14:12.863 6.138 - 6.167: 98.7544% ( 2) 00:14:12.863 6.167 - 6.196: 98.7629% ( 1) 00:14:12.863 6.225 - 6.255: 98.7714% ( 1) 00:14:12.863 6.255 - 6.284: 98.7799% ( 1) 00:14:12.863 6.284 - 6.313: 98.7883% ( 1) 00:14:12.863 6.313 - 6.342: 98.7968% ( 1) 00:14:12.863 6.400 - 6.429: 98.8053% ( 1) 00:14:12.863 6.429 - 6.458: 98.8138% ( 1) 00:14:12.863 6.458 - 6.487: 98.8222% ( 1) 00:14:12.863 6.604 - 6.633: 98.8307% ( 1) 00:14:12.863 6.633 - 6.662: 98.8477% ( 2) 00:14:12.863 6.982 - 7.011: 98.8561% ( 1) 00:14:12.863 7.738 - 7.796: 98.8646% ( 1) 00:14:12.863 8.320 - 8.378: 98.8731% ( 1) 00:14:12.863 9.193 - 9.251: 98.8815% ( 1) 00:14:12.863 9.484 - 9.542: 98.8900% ( 1) 00:14:12.863 9.542 - 9.600: 98.8985% ( 1) 00:14:12.863 9.949 - 10.007: 98.9070% ( 1) 00:14:12.863 10.065 - 10.124: 98.9324% ( 3) 00:14:12.863 10.124 - 10.182: 98.9409% ( 1) 00:14:12.863 10.240 - 10.298: 98.9493% ( 1) 00:14:12.863 10.415 - 10.473: 98.9578% ( 1) 00:14:12.863 10.531 - 10.589: 98.9663% ( 1) 00:14:12.863 10.705 - 10.764: 98.9832% ( 2) 00:14:12.863 10.764 - 10.822: 98.9917% ( 1) 00:14:12.863 10.996 - 11.055: 99.0002% ( 1) 00:14:12.863 11.055 - 11.113: 99.0171% ( 2) 00:14:12.863 11.113 - 11.171: 99.0256% ( 1) 00:14:12.863 11.287 - 11.345: 99.0425% ( 2) 00:14:12.863 11.520 - 11.578: 99.0510% ( 1) 00:14:12.863 11.636 - 11.695: 99.0595% ( 1) 00:14:12.863 11.695 - 11.753: 99.0680% ( 1) 00:14:12.863 11.811 - 11.869: 99.0849% ( 2) 00:14:12.863 12.218 - 12.276: 99.0934% ( 1) 00:14:12.863 12.335 - 12.393: 99.1018% ( 1) 00:14:12.863 12.393 - 12.451: 99.1103% ( 1) 00:14:12.863 12.451 - 12.509: 99.1188% ( 1) 00:14:12.863 12.509 - 12.567: 99.1357% ( 2) 00:14:12.863 12.625 - 12.684: 99.1442% ( 1) 00:14:12.863 13.033 - 13.091: 99.1527% ( 1) 00:14:12.863 13.324 - 13.382: 99.1612% ( 1) 00:14:12.863 13.440 - 13.498: 99.1781% ( 2) 00:14:12.863 13.673 - 13.731: 99.1866% ( 1) 00:14:12.863 13.731 - 13.789: 99.1951% ( 1) 00:14:12.863 13.905 - 13.964: 99.2035% ( 1) 00:14:12.863 13.964 - 14.022: 99.2120% ( 1) 00:14:12.863 14.138 - 14.196: 99.2205% ( 1) 00:14:12.863 14.604 - 14.662: 99.2289% ( 1) 00:14:12.863 14.778 - 14.836: 99.2374% ( 1) 00:14:12.863 14.836 - 14.895: 99.2544% ( 2) 00:14:12.863 14.895 - 15.011: 99.2798% ( 3) 00:14:12.863 15.127 - 15.244: 99.3052% ( 3) 00:14:12.863 15.244 - 15.360: 99.3476% ( 5) 00:14:12.863 15.360 - 15.476: 99.3815% ( 4) 00:14:12.864 15.476 - 15.593: 99.3899% ( 1) 00:14:12.864 15.593 - 15.709: 99.4154% ( 3) 00:14:12.864 15.709 - 15.825: 99.4408% ( 3) 00:14:12.864 15.825 - 15.942: 99.4492% ( 1) 00:14:12.864 15.942 - 16.058: 99.4662% ( 2) 00:14:12.864 16.058 - 16.175: 99.4747% ( 1) 00:14:12.864 16.175 - 16.291: 99.4831% ( 1) 00:14:12.864 16.291 - 16.407: 99.5170% ( 4) 00:14:12.864 16.407 - 16.524: 99.5340% ( 2) 00:14:12.864 16.524 - 16.640: 99.5425% ( 1) 00:14:12.864 16.640 - 16.756: 99.5509% ( 1) 00:14:12.864 16.756 - 16.873: 99.5679% ( 2) 00:14:12.864 16.873 - 16.989: 99.5763% ( 1) 00:14:12.864 16.989 - 17.105: 99.5933% ( 2) 00:14:12.864 17.222 - 17.338: 99.6018% ( 1) 00:14:12.864 17.338 - 17.455: 99.6187% ( 2) 00:14:12.864 17.455 - 17.571: 99.6272% ( 1) 00:14:12.864 17.571 - 17.687: 99.6441% ( 2) 00:14:12.864 17.920 - 18.036: 99.6526% ( 1) 00:14:12.864 18.036 - 18.153: 99.6611% ( 1) 
00:14:12.864 18.502 - 18.618: 99.6780% ( 2) 00:14:12.864 18.618 - 18.735: 99.6950% ( 2) 00:14:12.864 18.735 - 18.851: 99.7034% ( 1) 00:14:12.864 18.851 - 18.967: 99.7289% ( 3) 00:14:12.864 19.084 - 19.200: 99.7458% ( 2) 00:14:12.864 19.898 - 20.015: 99.7543% ( 1) 00:14:12.864 20.131 - 20.247: 99.7628% ( 1) 00:14:12.864 20.247 - 20.364: 99.7797% ( 2) 00:14:12.864 20.480 - 20.596: 99.7882% ( 1) 00:14:12.864 20.713 - 20.829: 99.8051% ( 2) 00:14:12.864 21.644 - 21.760: 99.8136% ( 1) 00:14:12.864 22.924 - 23.040: 99.8221% ( 1) 00:14:12.864 23.738 - 23.855: 99.8305% ( 1) 00:14:12.864 29.789 - 30.022: 99.8390% ( 1) 00:14:12.864 30.953 - 31.185: 99.8475% ( 1) 00:14:12.864 31.418 - 31.651: 99.8560% ( 1) 00:14:12.864 32.349 - 32.582: 99.8644% ( 1) 00:14:12.864 40.727 - 40.960: 99.8729% ( 1) 00:14:12.864 990.487 - 997.935: 99.8898% ( 2) 00:14:12.864 1027.724 - 1035.171: 99.8983% ( 1) 00:14:12.864 1035.171 - 1042.618: 99.9153% ( 2) 00:14:12.864 2010.764 - 2025.658: 99.9237% ( 1) 00:14:12.864 3008.698 - 3023.593: 99.9322% ( 1) 00:14:12.864 3932.160 - 3961.949: 99.9407% ( 1) 00:14:12.864 3961.949 - 3991.738: 99.9492% ( 1) 00:14:12.864 3991.738 - 4021.527: 99.9661% ( 2) 00:14:12.864 4021.527 - 4051.316: 99.9831% ( 2) 00:14:12.864 5004.567 - 5034.356: 99.9915% ( 1) 00:14:12.864 5034.356 - 5064.145: 100.0000% ( 1) 00:14:12.864 00:14:12.864 Complete histogram 00:14:12.864 ================== 00:14:12.864 Range in us Cumulative Count 00:14:12.864 2.196 - 2.211: 2.1522% ( 254) 00:14:12.864 2.211 - 2.225: 42.0014% ( 4703) 00:14:12.864 2.225 - 2.240: 74.8348% ( 3875) 00:14:12.864 2.240 - 2.255: 77.8682% ( 358) 00:14:12.864 2.255 - 2.269: 78.7917% ( 109) 00:14:12.864 2.269 - 2.284: 80.2661% ( 174) 00:14:12.864 2.284 - 2.298: 87.0869% ( 805) 00:14:12.864 2.298 - 2.313: 91.5777% ( 530) 00:14:12.864 2.313 - 2.327: 92.5436% ( 114) 00:14:12.864 2.327 - 2.342: 92.9842% ( 52) 00:14:12.864 2.342 - 2.356: 93.3994% ( 49) 00:14:12.864 2.356 - 2.371: 95.0941% ( 200) 00:14:12.864 2.371 - 2.385: 96.3396% ( 147) 00:14:12.864 2.385 - 2.400: 96.6023% ( 31) 00:14:12.864 2.400 - 2.415: 96.7717% ( 20) 00:14:12.864 2.415 - 2.429: 96.9073% ( 16) 00:14:12.864 2.429 - 2.444: 97.1615% ( 30) 00:14:12.864 2.444 - 2.458: 97.3903% ( 27) 00:14:12.864 2.458 - 2.473: 97.4920% ( 12) 00:14:12.864 2.473 - 2.487: 97.5852% ( 11) 00:14:12.864 2.487 - 2.502: 97.6360% ( 6) 00:14:12.864 2.502 - 2.516: 97.6868% ( 6) 00:14:12.864 2.516 - 2.531: 97.7292% ( 5) 00:14:12.864 2.531 - 2.545: 97.8055% ( 9) 00:14:12.864 2.545 - 2.560: 97.8224% ( 2) 00:14:12.864 2.560 - 2.575: 97.8393% ( 2) 00:14:12.864 2.575 - 2.589: 97.8732% ( 4) 00:14:12.864 2.589 - 2.604: 97.8987% ( 3) 00:14:12.864 2.604 - 2.618: 97.9156% ( 2) 00:14:12.864 2.633 - 2.647: 97.9326% ( 2) 00:14:12.864 2.647 - 2.662: 97.9410% ( 1) 00:14:12.864 2.662 - 2.676: 97.9664% ( 3) 00:14:12.864 2.676 - 2.691: 97.9749% ( 1) 00:14:12.864 2.691 - 2.705: 98.0003% ( 3) 00:14:12.864 2.705 - 2.720: 98.0173% ( 2) 00:14:12.864 2.735 - 2.749: 98.0258% ( 1) 00:14:12.864 2.749 - 2.764: 98.0342% ( 1) 00:14:12.864 2.764 - 2.778: 98.0512% ( 2) 00:14:12.864 2.778 - 2.793: 98.0597% ( 1) 00:14:12.864 2.793 - 2.807: 98.0766% ( 2) 00:14:12.864 2.807 - 2.822: 98.0935% ( 2) 00:14:12.864 2.822 - 2.836: 98.1105% ( 2) 00:14:12.864 2.836 - 2.851: 98.1190% ( 1) 00:14:12.864 2.851 - 2.865: 98.1274% ( 1) 00:14:12.864 2.865 - 2.880: 98.1359% ( 1) 00:14:12.864 2.880 - 2.895: 98.1529% ( 2) 00:14:12.864 2.895 - 2.909: 98.1783% ( 3) 00:14:12.864 2.938 - 2.953: 98.1867% ( 1) 00:14:12.864 2.967 - 2.982: 98.1952% ( 1) 00:14:12.864 3.055 - 
3.069: 98.2037% ( 1) 00:14:12.864 3.447 - 3.462: 98.2122% ( 1) 00:14:12.864 3.665 - 3.680: 98.2206% ( 1) 00:14:12.864 4.044 - 4.073: 98.2291% ( 1) 00:14:12.864 4.102 - 4.131: 98.2461% ( 2) 00:14:12.864 4.160 - 4.189: 98.2545% ( 1) 00:14:12.864 4.218 - 4.247: 98.2715% ( 2) 00:14:12.864 4.247 - 4.276: 98.2800% ( 1) 00:14:12.864 4.305 - 4.335: 98.2884% ( 1) 00:14:12.864 4.335 - 4.364: 98.3054% ( 2) 00:14:12.864 4.364 - 4.393: 98.3138% ( 1) 00:14:12.864 4.422 - 4.451: 98.3308% ( 2) 00:14:12.864 4.451 - 4.480: 98.3477% ( 2) 00:14:12.864 4.480 - 4.509: 98.3901% ( 5) 00:14:12.864 4.509 - 4.538: 98.3986% ( 1) 00:14:12.864 4.538 - 4.567: 98.4070% ( 1) 00:14:12.864 4.567 - 4.596: 98.4494% ( 5) 00:14:12.864 4.596 - 4.625: 98.4579% ( 1) 00:14:12.864 4.625 - 4.655: 98.4664% ( 1) 00:14:12.864 4.655 - 4.684: 98.4833% ( 2) 00:14:12.864 4.771 - 4.800: 98.5003% ( 2) 00:14:12.864 4.916 - 4.945: 98.5087% ( 1) 00:14:12.864 4.945 - 4.975: 98.5172% ( 1) 00:14:12.864 5.062 - 5.091: 98.5257% ( 1) 00:14:12.864 5.236 - 5.265: 98.5341% ( 1) 00:14:12.864 5.324 - 5.353: 98.5426% ( 1) 00:14:12.864 5.382 - 5.411: 98.5511% ( 1) 00:14:12.864 5.469 - 5.498: 98.5596% ( 1) 00:14:12.864 5.615 - 5.644: 98.5680% ( 1) 00:14:12.864 7.796 - 7.855: 98.5765% ( 1) 00:14:12.864 8.029 - 8.087: 98.5850% ( 1) 00:14:12.864 8.204 - 8.262: 98.5935% ( 1) 00:14:12.864 8.262 - 8.320: 98.6019% ( 1) 00:14:12.864 8.320 - 8.378: 98.6189% ( 2) 00:14:12.864 8.378 - 8.436: 98.6274% ( 1) 00:14:12.864 8.495 - 8.553: 98.6358% ( 1) 00:14:12.864 8.553 - 8.611: 98.6528% ( 2) 00:14:12.864 8.611 - 8.669: 98.6697% ( 2) 00:14:12.864 8.669 - 8.727: 98.6782% ( 1) 00:14:12.864 8.727 - 8.785: 98.6951% ( 2) 00:14:12.864 8.785 - 8.844: 98.7036% ( 1) 00:14:12.864 8.844 - 8.902: 98.7206% ( 2) 00:14:12.864 8.960 - 9.018: 98.7460% ( 3) 00:14:12.864 9.018 - 9.076: 98.7629% ( 2) 00:14:12.864 9.251 - 9.309: 98.7714% ( 1) 00:14:12.864 9.367 - 9.425: 98.7799% ( 1) 00:14:12.864 9.600 - 9.658: 98.7968% ( 2) 00:14:12.864 9.716 - 9.775: 98.8138% ( 2) 00:14:12.864 9.949 - 10.007: 98.8222% ( 1) 00:14:12.864 10.007 - 10.065: 98.8307% ( 1) 00:14:12.864 10.124 - 10.182: 98.8392% ( 1) 00:14:12.864 10.182 - 10.240: 98.8477% ( 1) 00:14:12.864 10.356 - 10.415: 98.8561% ( 1) 00:14:12.864 10.473 - 10.531: 98.8646% ( 1) 00:14:12.864 10.531 - 10.589: 98.8731% ( 1) 00:14:12.864 10.880 - 10.938: 98.8900% ( 2) 00:14:12.864 12.044 - 12.102: 98.8985% ( 1) 00:14:12.864 12.567 - 12.625: 98.9070% ( 1) 00:14:12.864 13.149 - 13.207: 98.9154% ( 1) 00:14:12.864 14.836 - 14.895: 98.9239% ( 1) 00:14:12.864 14.895 - 15.011: 98.9324% ( 1) 00:14:12.864 15.244 - 15.360: 98.9409% ( 1) 00:14:12.864 15.360 - 15.476: 98.9493% ( 1) 00:14:12.864 15.709 - 15.825: 98.9663% ( 2) 00:14:12.864 15.942 - 16.058: 98.9748% ( 1) 00:14:12.864 16.058 - 16.175: 98.9832% ( 1) 00:14:12.864 16.640 - 16.756: 99.0171% ( 4) 00:14:12.864 16.756 - 16.873: 99.0510% ( 4) 00:14:12.864 16.873 - 16.989: 99.0595% ( 1) 00:14:12.864 16.989 - 17.105: 99.0764% ( 2) 00:14:12.864 17.105 - 17.222: 99.0934% ( 2) 00:14:12.864 17.222 - 17.338: 99.1188% ( 3) 00:14:12.864 17.338 - 17.455: 99.1273% ( 1) 00:14:12.864 17.455 - 17.571: 99.1357% ( 1) 00:14:12.864 17.571 - 17.687: 99.1527% ( 2) 00:14:12.864 17.804 - 17.920: 99.1612% ( 1) 00:14:12.864 17.920 - 18.036: 99.1781% ( 2) 00:14:12.864 18.153 - 18.269: 99.2035% ( 3) 00:14:12.864 18.269 - 18.385: 99.2205% ( 2) 00:14:12.864 18.385 - 18.502: 99.2289% ( 1) 00:14:12.864 18.618 - 18.735: 99.2459% ( 2) 00:14:12.864 18.735 - 18.851: 99.2713% ( 3) 00:14:12.864 19.200 - 19.316: 99.2883% ( 2) 00:14:12.864 19.898 
- 20.015: 99.2967% ( 1) 00:14:12.864 20.364 - 20.480: 99.3052% ( 1) 00:14:12.864 21.876 - 21.993: 99.3137% ( 1) 00:14:12.864 28.742 - 28.858: 99.3221% ( 1) 00:14:12.864 30.255 - 30.487: 99.3306% ( 1) 00:14:12.864 983.040 - 990.487: 99.3391% ( 1) 00:14:12.864 990.487 - 997.935: 99.3476% ( 1) 00:14:12.864 997.935 - 1005.382: 99.3560% ( 1) 00:14:12.865 1012.829 - 1020.276: 99.3730% ( 2) 00:14:12.865 1020.276 - 1027.724: 99.4069% ( 4) 00:14:12.865 1050.065 - 1057.513: 99.4323% ( 3) 00:14:12.865 1072.407 - 1079.855: 99.4408% ( 1) 00:14:12.865 1980.975 - 1995.869: 99.4492% ( 1) 00:14:12.865 2010.764 - 2025.658: 99.4662% ( 2) 00:14:12.865 2040.553 - 2055.447: 99.4831% ( 2) 00:14:12.865 2978.909 - 2993.804: 99.4916% ( 1) 00:14:12.865 3008.698 - 3023.593: 99.5001% ( 1) 00:14:12.865 3038.487 - 3053.382: 99.5170% ( 2) 00:14:12.865 3961.949 - 3991.738: 99.5594% ( 5) 00:14:12.865 3991.738 - 4021.527: 99.7543% ( 23) 00:14:12.865 4021.527 - 4051.316: 99.8221% ( 8) 00:14:12.865 4051.316 - 4081.105: 99.8305% ( 1) 00:14:12.865 4974.778 - 5004.567: 99.8729% ( 5) 00:14:12.865 5004.567 - 5034.356: 99.9831% ( 13) 00:14:12.865 5034.356 - 5064.145: 100.0000% ( 2) 00:14:12.865 00:14:12.865 03:57:47 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:12.865 03:57:47 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:12.865 03:57:47 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:12.865 03:57:47 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:12.865 03:57:47 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:13.124 [ 00:14:13.124 { 00:14:13.124 "allow_any_host": true, 00:14:13.124 "hosts": [], 00:14:13.124 "listen_addresses": [], 00:14:13.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:13.124 "subtype": "Discovery" 00:14:13.124 }, 00:14:13.124 { 00:14:13.124 "allow_any_host": true, 00:14:13.124 "hosts": [], 00:14:13.124 "listen_addresses": [ 00:14:13.124 { 00:14:13.124 "adrfam": "IPv4", 00:14:13.124 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:13.124 "transport": "VFIOUSER", 00:14:13.124 "trsvcid": "0", 00:14:13.124 "trtype": "VFIOUSER" 00:14:13.124 } 00:14:13.124 ], 00:14:13.124 "max_cntlid": 65519, 00:14:13.124 "max_namespaces": 32, 00:14:13.124 "min_cntlid": 1, 00:14:13.124 "model_number": "SPDK bdev Controller", 00:14:13.124 "namespaces": [ 00:14:13.124 { 00:14:13.124 "bdev_name": "Malloc1", 00:14:13.124 "name": "Malloc1", 00:14:13.124 "nguid": "A944AA8C2A6B4291BE2F8CCD4840F1C1", 00:14:13.124 "nsid": 1, 00:14:13.124 "uuid": "a944aa8c-2a6b-4291-be2f-8ccd4840f1c1" 00:14:13.124 }, 00:14:13.124 { 00:14:13.124 "bdev_name": "Malloc3", 00:14:13.124 "name": "Malloc3", 00:14:13.124 "nguid": "E744DEAC3AF44D3AB9A4679BC19C221A", 00:14:13.124 "nsid": 2, 00:14:13.124 "uuid": "e744deac-3af4-4d3a-b9a4-679bc19c221a" 00:14:13.124 } 00:14:13.124 ], 00:14:13.124 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:13.124 "serial_number": "SPDK1", 00:14:13.124 "subtype": "NVMe" 00:14:13.124 }, 00:14:13.124 { 00:14:13.124 "allow_any_host": true, 00:14:13.124 "hosts": [], 00:14:13.124 "listen_addresses": [ 00:14:13.124 { 00:14:13.124 "adrfam": "IPv4", 00:14:13.124 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:13.124 "transport": "VFIOUSER", 00:14:13.124 "trsvcid": "0", 00:14:13.124 "trtype": "VFIOUSER" 00:14:13.124 } 00:14:13.124 ], 00:14:13.124 "max_cntlid": 65519, 00:14:13.124 
"max_namespaces": 32, 00:14:13.124 "min_cntlid": 1, 00:14:13.124 "model_number": "SPDK bdev Controller", 00:14:13.124 "namespaces": [ 00:14:13.124 { 00:14:13.124 "bdev_name": "Malloc2", 00:14:13.124 "name": "Malloc2", 00:14:13.124 "nguid": "9872D21EF98B4AC8B0C07A8DB1ACC6E0", 00:14:13.124 "nsid": 1, 00:14:13.124 "uuid": "9872d21e-f98b-4ac8-b0c0-7a8db1acc6e0" 00:14:13.124 } 00:14:13.124 ], 00:14:13.124 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:13.124 "serial_number": "SPDK2", 00:14:13.124 "subtype": "NVMe" 00:14:13.124 } 00:14:13.124 ] 00:14:13.124 03:57:48 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:13.124 03:57:48 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71482 00:14:13.124 03:57:48 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:13.124 03:57:48 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:13.124 03:57:48 -- common/autotest_common.sh@1254 -- # local i=0 00:14:13.124 03:57:48 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:13.124 03:57:48 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:14:13.124 03:57:48 -- common/autotest_common.sh@1257 -- # i=1 00:14:13.124 03:57:48 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:13.124 03:57:48 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:13.124 03:57:48 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:14:13.124 03:57:48 -- common/autotest_common.sh@1257 -- # i=2 00:14:13.124 03:57:48 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:13.124 03:57:48 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:13.382 03:57:48 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:13.382 03:57:48 -- common/autotest_common.sh@1265 -- # return 0 00:14:13.382 03:57:48 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:13.382 03:57:48 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:13.641 Malloc4 00:14:13.641 03:57:48 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:13.899 03:57:48 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:13.899 Asynchronous Event Request test 00:14:13.899 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.899 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.899 Registering asynchronous event callbacks... 00:14:13.899 Starting namespace attribute notice tests for all controllers... 00:14:13.899 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:13.899 aer_cb - Changed Namespace 00:14:13.899 Cleaning up... 
00:14:14.158 [ 00:14:14.158 { 00:14:14.158 "allow_any_host": true, 00:14:14.158 "hosts": [], 00:14:14.158 "listen_addresses": [], 00:14:14.158 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:14.158 "subtype": "Discovery" 00:14:14.158 }, 00:14:14.158 { 00:14:14.158 "allow_any_host": true, 00:14:14.158 "hosts": [], 00:14:14.158 "listen_addresses": [ 00:14:14.158 { 00:14:14.158 "adrfam": "IPv4", 00:14:14.158 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:14.158 "transport": "VFIOUSER", 00:14:14.158 "trsvcid": "0", 00:14:14.158 "trtype": "VFIOUSER" 00:14:14.158 } 00:14:14.158 ], 00:14:14.158 "max_cntlid": 65519, 00:14:14.158 "max_namespaces": 32, 00:14:14.158 "min_cntlid": 1, 00:14:14.158 "model_number": "SPDK bdev Controller", 00:14:14.158 "namespaces": [ 00:14:14.158 { 00:14:14.158 "bdev_name": "Malloc1", 00:14:14.158 "name": "Malloc1", 00:14:14.159 "nguid": "A944AA8C2A6B4291BE2F8CCD4840F1C1", 00:14:14.159 "nsid": 1, 00:14:14.159 "uuid": "a944aa8c-2a6b-4291-be2f-8ccd4840f1c1" 00:14:14.159 }, 00:14:14.159 { 00:14:14.159 "bdev_name": "Malloc3", 00:14:14.159 "name": "Malloc3", 00:14:14.159 "nguid": "E744DEAC3AF44D3AB9A4679BC19C221A", 00:14:14.159 "nsid": 2, 00:14:14.159 "uuid": "e744deac-3af4-4d3a-b9a4-679bc19c221a" 00:14:14.159 } 00:14:14.159 ], 00:14:14.159 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:14.159 "serial_number": "SPDK1", 00:14:14.159 "subtype": "NVMe" 00:14:14.159 }, 00:14:14.159 { 00:14:14.159 "allow_any_host": true, 00:14:14.159 "hosts": [], 00:14:14.159 "listen_addresses": [ 00:14:14.159 { 00:14:14.159 "adrfam": "IPv4", 00:14:14.159 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:14.159 "transport": "VFIOUSER", 00:14:14.159 "trsvcid": "0", 00:14:14.159 "trtype": "VFIOUSER" 00:14:14.159 } 00:14:14.159 ], 00:14:14.159 "max_cntlid": 65519, 00:14:14.159 "max_namespaces": 32, 00:14:14.159 "min_cntlid": 1, 00:14:14.159 "model_number": "SPDK bdev Controller", 00:14:14.159 "namespaces": [ 00:14:14.159 { 00:14:14.159 "bdev_name": "Malloc2", 00:14:14.159 "name": "Malloc2", 00:14:14.159 "nguid": "9872D21EF98B4AC8B0C07A8DB1ACC6E0", 00:14:14.159 "nsid": 1, 00:14:14.159 "uuid": "9872d21e-f98b-4ac8-b0c0-7a8db1acc6e0" 00:14:14.159 }, 00:14:14.159 { 00:14:14.159 "bdev_name": "Malloc4", 00:14:14.159 "name": "Malloc4", 00:14:14.159 "nguid": "E4DCB50F7F0A405DBE5D530D2229F644", 00:14:14.159 "nsid": 2, 00:14:14.159 "uuid": "e4dcb50f-7f0a-405d-be5d-530d2229f644" 00:14:14.159 } 00:14:14.159 ], 00:14:14.159 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:14.159 "serial_number": "SPDK2", 00:14:14.159 "subtype": "NVMe" 00:14:14.159 } 00:14:14.159 ] 00:14:14.159 03:57:49 -- target/nvmf_vfio_user.sh@44 -- # wait 71482 00:14:14.159 03:57:49 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:14.159 03:57:49 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70800 00:14:14.159 03:57:49 -- common/autotest_common.sh@936 -- # '[' -z 70800 ']' 00:14:14.159 03:57:49 -- common/autotest_common.sh@940 -- # kill -0 70800 00:14:14.159 03:57:49 -- common/autotest_common.sh@941 -- # uname 00:14:14.159 03:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.159 03:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70800 00:14:14.159 killing process with pid 70800 00:14:14.159 03:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.159 03:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.159 03:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70800' 00:14:14.159 03:57:49 -- 
common/autotest_common.sh@955 -- # kill 70800 00:14:14.159 [2024-11-08 03:57:49.183107] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:14.159 03:57:49 -- common/autotest_common.sh@960 -- # wait 70800 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:14.727 Process pid: 71535 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71535 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71535' 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:14.727 03:57:49 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71535 00:14:14.727 03:57:49 -- common/autotest_common.sh@829 -- # '[' -z 71535 ']' 00:14:14.727 03:57:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.727 03:57:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.727 03:57:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.727 03:57:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.727 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:14:14.727 [2024-11-08 03:57:49.747155] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:14.727 [2024-11-08 03:57:49.748186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:14.727 [2024-11-08 03:57:49.748264] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.985 [2024-11-08 03:57:49.883302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.985 [2024-11-08 03:57:50.028089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.985 [2024-11-08 03:57:50.028248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.985 [2024-11-08 03:57:50.028262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.985 [2024-11-08 03:57:50.028271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
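The two app_setup_trace notices above double as a capture recipe for this interrupt-mode run; a hedged sketch of acting on them while the target is still alive ('-s nvmf -i 0' and the /dev/shm path come straight from the log, the location of the spdk_trace binary is an assumption):

# take a live snapshot of the tracepoints enabled by '-e 0xFFFF'
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# or keep the shared-memory trace file for offline analysis after the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0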
00:14:14.985 [2024-11-08 03:57:50.028455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.985 [2024-11-08 03:57:50.028588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.985 [2024-11-08 03:57:50.029205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.985 [2024-11-08 03:57:50.029264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.243 [2024-11-08 03:57:50.147158] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:14:15.243 [2024-11-08 03:57:50.153656] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:14:15.243 [2024-11-08 03:57:50.153910] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:14:15.243 [2024-11-08 03:57:50.154767] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:15.243 [2024-11-08 03:57:50.154921] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:14:15.810 03:57:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.810 03:57:50 -- common/autotest_common.sh@862 -- # return 0 00:14:15.810 03:57:50 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:16.745 03:57:51 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:17.004 03:57:52 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:17.004 03:57:52 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:17.004 03:57:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.004 03:57:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:17.004 03:57:52 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.263 Malloc1 00:14:17.263 03:57:52 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:17.521 03:57:52 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:17.780 03:57:52 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:18.039 03:57:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.039 03:57:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:18.039 03:57:53 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:18.297 Malloc2 00:14:18.297 03:57:53 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:18.865 03:57:53 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:18.865 03:57:53 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:19.124 
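Stripped of the xtrace noise, the per-controller setup that just completed is a short, repeatable RPC sequence; a sketch for the first device, with every argument taken verbatim from the trace above (only the comments and the $rpc shorthand are added; the meaning of the extra '-M -I' transport arguments is not spelled out in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I      # create the vfio-user transport for this interrupt-mode variant
mkdir -p /var/run/vfio-user/domain/vfio-user1/1   # directory backing the vfio-user socket/region files
$rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB malloc bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0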
03:57:54 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:19.124 03:57:54 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71535 00:14:19.124 03:57:54 -- common/autotest_common.sh@936 -- # '[' -z 71535 ']' 00:14:19.124 03:57:54 -- common/autotest_common.sh@940 -- # kill -0 71535 00:14:19.124 03:57:54 -- common/autotest_common.sh@941 -- # uname 00:14:19.124 03:57:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:19.124 03:57:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71535 00:14:19.124 killing process with pid 71535 00:14:19.124 03:57:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:19.124 03:57:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:19.124 03:57:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71535' 00:14:19.124 03:57:54 -- common/autotest_common.sh@955 -- # kill 71535 00:14:19.124 03:57:54 -- common/autotest_common.sh@960 -- # wait 71535 00:14:19.705 03:57:54 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:19.705 03:57:54 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:19.705 00:14:19.705 real 0m56.125s 00:14:19.705 user 3m40.578s 00:14:19.705 sys 0m3.924s 00:14:19.705 03:57:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.705 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:14:19.705 ************************************ 00:14:19.705 END TEST nvmf_vfio_user 00:14:19.705 ************************************ 00:14:19.705 03:57:54 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:19.705 03:57:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.705 03:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.705 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:14:19.705 ************************************ 00:14:19.705 START TEST nvmf_vfio_user_nvme_compliance 00:14:19.705 ************************************ 00:14:19.705 03:57:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:19.705 * Looking for test storage... 
00:14:19.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:14:19.705 03:57:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:19.705 03:57:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:19.705 03:57:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:19.975 03:57:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:19.975 03:57:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:19.975 03:57:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:19.975 03:57:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:19.975 03:57:54 -- scripts/common.sh@335 -- # IFS=.-: 00:14:19.975 03:57:54 -- scripts/common.sh@335 -- # read -ra ver1 00:14:19.975 03:57:54 -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.975 03:57:54 -- scripts/common.sh@336 -- # read -ra ver2 00:14:19.975 03:57:54 -- scripts/common.sh@337 -- # local 'op=<' 00:14:19.975 03:57:54 -- scripts/common.sh@339 -- # ver1_l=2 00:14:19.975 03:57:54 -- scripts/common.sh@340 -- # ver2_l=1 00:14:19.975 03:57:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:19.975 03:57:54 -- scripts/common.sh@343 -- # case "$op" in 00:14:19.975 03:57:54 -- scripts/common.sh@344 -- # : 1 00:14:19.975 03:57:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:19.975 03:57:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.975 03:57:54 -- scripts/common.sh@364 -- # decimal 1 00:14:19.976 03:57:54 -- scripts/common.sh@352 -- # local d=1 00:14:19.976 03:57:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.976 03:57:54 -- scripts/common.sh@354 -- # echo 1 00:14:19.976 03:57:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:19.976 03:57:54 -- scripts/common.sh@365 -- # decimal 2 00:14:19.976 03:57:54 -- scripts/common.sh@352 -- # local d=2 00:14:19.976 03:57:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.976 03:57:54 -- scripts/common.sh@354 -- # echo 2 00:14:19.976 03:57:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:19.976 03:57:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:19.976 03:57:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:19.976 03:57:54 -- scripts/common.sh@367 -- # return 0 00:14:19.976 03:57:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.976 03:57:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.976 --rc genhtml_branch_coverage=1 00:14:19.976 --rc genhtml_function_coverage=1 00:14:19.976 --rc genhtml_legend=1 00:14:19.976 --rc geninfo_all_blocks=1 00:14:19.976 --rc geninfo_unexecuted_blocks=1 00:14:19.976 00:14:19.976 ' 00:14:19.976 03:57:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.976 --rc genhtml_branch_coverage=1 00:14:19.976 --rc genhtml_function_coverage=1 00:14:19.976 --rc genhtml_legend=1 00:14:19.976 --rc geninfo_all_blocks=1 00:14:19.976 --rc geninfo_unexecuted_blocks=1 00:14:19.976 00:14:19.976 ' 00:14:19.976 03:57:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.976 --rc genhtml_branch_coverage=1 00:14:19.976 --rc genhtml_function_coverage=1 00:14:19.976 --rc genhtml_legend=1 00:14:19.976 --rc geninfo_all_blocks=1 00:14:19.976 --rc geninfo_unexecuted_blocks=1 00:14:19.976 00:14:19.976 ' 00:14:19.976 
03:57:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.976 --rc genhtml_branch_coverage=1 00:14:19.976 --rc genhtml_function_coverage=1 00:14:19.976 --rc genhtml_legend=1 00:14:19.976 --rc geninfo_all_blocks=1 00:14:19.976 --rc geninfo_unexecuted_blocks=1 00:14:19.976 00:14:19.976 ' 00:14:19.976 03:57:54 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.976 03:57:54 -- nvmf/common.sh@7 -- # uname -s 00:14:19.976 03:57:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.976 03:57:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.976 03:57:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.976 03:57:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.976 03:57:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.976 03:57:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.976 03:57:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.976 03:57:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.976 03:57:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.976 03:57:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.976 03:57:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:19.976 03:57:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:19.976 03:57:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.976 03:57:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.976 03:57:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.976 03:57:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.976 03:57:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.976 03:57:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.976 03:57:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.976 03:57:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.976 03:57:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.976 03:57:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.976 03:57:54 -- paths/export.sh@5 -- # export PATH 00:14:19.976 03:57:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.976 03:57:54 -- nvmf/common.sh@46 -- # : 0 00:14:19.976 03:57:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.976 03:57:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.976 03:57:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.976 03:57:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.976 03:57:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.976 03:57:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.976 03:57:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.976 03:57:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.976 03:57:54 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.976 03:57:54 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.976 03:57:54 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:19.976 03:57:54 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:19.976 03:57:54 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:19.976 03:57:54 -- compliance/compliance.sh@20 -- # nvmfpid=71733 00:14:19.976 Process pid: 71733 00:14:19.976 03:57:54 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71733' 00:14:19.976 03:57:54 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.976 03:57:54 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.976 03:57:54 -- compliance/compliance.sh@24 -- # waitforlisten 71733 00:14:19.976 03:57:54 -- common/autotest_common.sh@829 -- # '[' -z 71733 ']' 00:14:19.976 03:57:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.976 03:57:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.976 03:57:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.976 03:57:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.976 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:14:19.976 [2024-11-08 03:57:54.946376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
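The launch handshake traced above is the standard pattern for every target in this log: background the app, record its pid, arm a cleanup trap, then poll until the RPC socket answers. Condensed (the retry budget and the rpc.py invocation are illustrative, not the exact autotest internals):

/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
echo "Process pid: $nvmfpid"
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 400; i != 0; i-- )); do
        kill -0 "$pid" || return 1                             # died during startup
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1                                                   # never came up
}
waitforlisten "$nvmfpid"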
00:14:19.976 [2024-11-08 03:57:54.946502] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.976 [2024-11-08 03:57:55.080696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.235 [2024-11-08 03:57:55.229616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.235 [2024-11-08 03:57:55.229789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.235 [2024-11-08 03:57:55.229802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.235 [2024-11-08 03:57:55.229810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.235 [2024-11-08 03:57:55.230009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.235 [2024-11-08 03:57:55.230165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.235 [2024-11-08 03:57:55.230173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.801 03:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.801 03:57:55 -- common/autotest_common.sh@862 -- # return 0 00:14:20.801 03:57:55 -- compliance/compliance.sh@26 -- # sleep 1 00:14:22.176 03:57:56 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:22.176 03:57:56 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:22.176 03:57:56 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:22.176 03:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.176 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.176 03:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.176 03:57:56 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:22.176 03:57:56 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:22.176 03:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.176 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.176 malloc0 00:14:22.176 03:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.176 03:57:56 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:22.176 03:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.176 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.176 03:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.176 03:57:56 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:22.176 03:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.176 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.176 03:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.176 03:57:56 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:22.176 03:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.176 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.176 03:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.176 03:57:56 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:22.176 00:14:22.176 00:14:22.176 CUnit - A unit testing framework for C - Version 2.1-3 00:14:22.176 http://cunit.sourceforge.net/ 00:14:22.176 00:14:22.176 00:14:22.176 Suite: nvme_compliance 00:14:22.176 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-08 03:57:57.174645] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:22.176 [2024-11-08 03:57:57.174737] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:22.176 [2024-11-08 03:57:57.174749] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:22.176 passed 00:14:22.434 Test: admin_identify_ctrlr_verify_fused ...passed 00:14:22.434 Test: admin_identify_ns ...[2024-11-08 03:57:57.412440] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:22.434 [2024-11-08 03:57:57.420457] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:22.434 passed 00:14:22.692 Test: admin_get_features_mandatory_features ...passed 00:14:22.692 Test: admin_get_features_optional_features ...passed 00:14:22.950 Test: admin_set_features_number_of_queues ...passed 00:14:22.950 Test: admin_get_log_page_mandatory_logs ...passed 00:14:22.950 Test: admin_get_log_page_with_lpo ...[2024-11-08 03:57:58.043442] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:23.209 passed 00:14:23.209 Test: fabric_property_get ...passed 00:14:23.209 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-08 03:57:58.231176] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:23.209 passed 00:14:23.467 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-08 03:57:58.400438] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:23.467 [2024-11-08 03:57:58.419432] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:23.467 passed 00:14:23.467 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-08 03:57:58.510532] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:23.467 passed 00:14:23.725 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-08 03:57:58.678489] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:23.725 [2024-11-08 03:57:58.702442] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:23.725 passed 00:14:23.725 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-08 03:57:58.795557] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:23.725 [2024-11-08 03:57:58.795658] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:23.725 passed 00:14:23.983 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-08 03:57:58.972441] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:23.983 [2024-11-08 03:57:58.980430] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:23.983 [2024-11-08 03:57:58.991433] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:23.983 [2024-11-08 03:57:58.999432] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:23.983 passed 
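The bring-up that preceded this suite maps one-to-one onto JSON-RPC calls; rpc_cmd in the trace is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock. The same sequence by hand:

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0       # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                # the vfio-user traddr is a directory

The compliance binary then attaches to that directory as if it were a local NVMe controller, which is why every negative test in this suite reports through vfio_user.c on the target side.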
00:14:24.241 Test: admin_create_io_sq_verify_pc ...[2024-11-08 03:57:59.130496] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:14:24.241 passed
00:14:25.613 Test: admin_create_io_qp_max_qps ...[2024-11-08 03:58:00.341436] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:14:25.871 passed
00:14:25.871 Test: admin_create_io_sq_shared_cq ...[2024-11-08 03:58:00.954439] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:14:26.130 passed
00:14:26.130
00:14:26.130 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:26.130              suites      1      1    n/a      0        0
00:14:26.130               tests     18     18     18      0        0
00:14:26.130             asserts    360    360    360      0      n/a
00:14:26.130
00:14:26.130 Elapsed time = 1.585 seconds
00:14:26.130 03:58:01 -- compliance/compliance.sh@42 -- # killprocess 71733
00:14:26.130 03:58:01 -- common/autotest_common.sh@936 -- # '[' -z 71733 ']'
00:14:26.130 03:58:01 -- common/autotest_common.sh@940 -- # kill -0 71733
00:14:26.130 03:58:01 -- common/autotest_common.sh@941 -- # uname
00:14:26.130 03:58:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:26.130 03:58:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71733
00:14:26.130 killing process with pid 71733
00:14:26.130 03:58:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:26.130 03:58:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:26.130 03:58:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71733'
00:14:26.130 03:58:01 -- common/autotest_common.sh@955 -- # kill 71733
00:14:26.130 03:58:01 -- common/autotest_common.sh@960 -- # wait 71733
00:14:26.387 03:58:01 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:14:26.387 03:58:01 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:14:26.387
00:14:26.387 real 0m6.777s
00:14:26.387 user 0m18.594s
00:14:26.387 sys 0m0.632s
00:14:26.387 03:58:01 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:26.387 03:58:01 -- common/autotest_common.sh@10 -- # set +x
00:14:26.387 ************************************
00:14:26.387 END TEST nvmf_vfio_user_nvme_compliance
00:14:26.387 ************************************
00:14:26.646 03:58:01 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:26.646 03:58:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:26.646 03:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:26.646 03:58:01 -- common/autotest_common.sh@10 -- # set +x
00:14:26.646 ************************************
00:14:26.646 START TEST nvmf_vfio_user_fuzz
00:14:26.646 ************************************
00:14:26.646 03:58:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:26.646 * Looking for test storage...
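Each START/END banner pair in this log comes from run_test in autotest_common.sh, which times the wrapped command. A rough sketch (banner width and the xtrace plumbing simplified; the real wrapper also validates its argument count first, the '[' 3 -le 1 ']' check above):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                      # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}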
00:14:26.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:26.646 03:58:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:26.646 03:58:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:26.646 03:58:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:26.646 03:58:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:26.646 03:58:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:26.646 03:58:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:26.646 03:58:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:26.646 03:58:01 -- scripts/common.sh@335 -- # IFS=.-: 00:14:26.646 03:58:01 -- scripts/common.sh@335 -- # read -ra ver1 00:14:26.646 03:58:01 -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.646 03:58:01 -- scripts/common.sh@336 -- # read -ra ver2 00:14:26.646 03:58:01 -- scripts/common.sh@337 -- # local 'op=<' 00:14:26.646 03:58:01 -- scripts/common.sh@339 -- # ver1_l=2 00:14:26.646 03:58:01 -- scripts/common.sh@340 -- # ver2_l=1 00:14:26.646 03:58:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:26.646 03:58:01 -- scripts/common.sh@343 -- # case "$op" in 00:14:26.646 03:58:01 -- scripts/common.sh@344 -- # : 1 00:14:26.646 03:58:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:26.646 03:58:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:26.646 03:58:01 -- scripts/common.sh@364 -- # decimal 1 00:14:26.646 03:58:01 -- scripts/common.sh@352 -- # local d=1 00:14:26.646 03:58:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.646 03:58:01 -- scripts/common.sh@354 -- # echo 1 00:14:26.646 03:58:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:26.646 03:58:01 -- scripts/common.sh@365 -- # decimal 2 00:14:26.646 03:58:01 -- scripts/common.sh@352 -- # local d=2 00:14:26.646 03:58:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.646 03:58:01 -- scripts/common.sh@354 -- # echo 2 00:14:26.646 03:58:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:26.646 03:58:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:26.646 03:58:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:26.646 03:58:01 -- scripts/common.sh@367 -- # return 0 00:14:26.646 03:58:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.646 03:58:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.646 --rc genhtml_branch_coverage=1 00:14:26.646 --rc genhtml_function_coverage=1 00:14:26.646 --rc genhtml_legend=1 00:14:26.646 --rc geninfo_all_blocks=1 00:14:26.646 --rc geninfo_unexecuted_blocks=1 00:14:26.646 00:14:26.646 ' 00:14:26.646 03:58:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.646 --rc genhtml_branch_coverage=1 00:14:26.646 --rc genhtml_function_coverage=1 00:14:26.646 --rc genhtml_legend=1 00:14:26.646 --rc geninfo_all_blocks=1 00:14:26.646 --rc geninfo_unexecuted_blocks=1 00:14:26.646 00:14:26.646 ' 00:14:26.646 03:58:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.646 --rc genhtml_branch_coverage=1 00:14:26.646 --rc genhtml_function_coverage=1 00:14:26.646 --rc genhtml_legend=1 00:14:26.646 --rc geninfo_all_blocks=1 00:14:26.646 --rc geninfo_unexecuted_blocks=1 00:14:26.646 00:14:26.646 ' 00:14:26.646 
03:58:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.646 --rc genhtml_branch_coverage=1 00:14:26.646 --rc genhtml_function_coverage=1 00:14:26.646 --rc genhtml_legend=1 00:14:26.646 --rc geninfo_all_blocks=1 00:14:26.646 --rc geninfo_unexecuted_blocks=1 00:14:26.646 00:14:26.646 ' 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.646 03:58:01 -- nvmf/common.sh@7 -- # uname -s 00:14:26.646 03:58:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.646 03:58:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.646 03:58:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.646 03:58:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.646 03:58:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.646 03:58:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.646 03:58:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.646 03:58:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.646 03:58:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.646 03:58:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.646 03:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:26.646 03:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:26.646 03:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.646 03:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.646 03:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.646 03:58:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.646 03:58:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.646 03:58:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.646 03:58:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.646 03:58:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.646 03:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.646 03:58:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.646 03:58:01 -- paths/export.sh@5 -- # export PATH 00:14:26.646 03:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.646 03:58:01 -- nvmf/common.sh@46 -- # : 0 00:14:26.646 03:58:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:26.646 03:58:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:26.646 03:58:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:26.646 03:58:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.646 03:58:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.646 03:58:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:26.646 03:58:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:26.646 03:58:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71893 00:14:26.646 Process pid: 71893 00:14:26.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71893' 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:26.646 03:58:01 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71893 00:14:26.646 03:58:01 -- common/autotest_common.sh@829 -- # '[' -z 71893 ']' 00:14:26.646 03:58:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.646 03:58:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.646 03:58:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
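The stage below builds a vfio-user target exactly as the compliance stage did, then aims SPDK's NVMe fuzzer at the listener through a transport-ID string. A sketch of the invocation that follows (paths and flags as logged; -N and -a select fuzzer-internal command sets and are left unglossed here):

trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
# -m 0x2     pin the fuzzer to core 1, away from the target's core 0
# -r ...     private RPC socket, so it cannot collide with the target's
# -t 30      bound the run to 30 seconds
# -S 123456  fixed PRNG seed, so any crash found is reproducible
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 -F "$trid" -N -a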
00:14:26.646 03:58:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.646 03:58:01 -- common/autotest_common.sh@10 -- # set +x 00:14:28.023 03:58:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.023 03:58:02 -- common/autotest_common.sh@862 -- # return 0 00:14:28.023 03:58:02 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:28.958 03:58:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.958 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 03:58:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:28.958 03:58:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.958 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 malloc0 00:14:28.958 03:58:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:28.958 03:58:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.958 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 03:58:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:28.958 03:58:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.958 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 03:58:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:28.958 03:58:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.958 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 03:58:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:28.958 03:58:03 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:29.216 Shutting down the fuzz application 00:14:29.216 03:58:04 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:29.216 03:58:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.216 03:58:04 -- common/autotest_common.sh@10 -- # set +x 00:14:29.475 03:58:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.475 03:58:04 -- target/vfio_user_fuzz.sh@46 -- # killprocess 71893 00:14:29.475 03:58:04 -- common/autotest_common.sh@936 -- # '[' -z 71893 ']' 00:14:29.475 03:58:04 -- common/autotest_common.sh@940 -- # kill -0 71893 00:14:29.475 03:58:04 -- common/autotest_common.sh@941 -- # uname 00:14:29.475 03:58:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.475 03:58:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71893 00:14:29.475 killing process with pid 71893 00:14:29.475 03:58:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.475 03:58:04 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.475 03:58:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71893' 00:14:29.475 03:58:04 -- common/autotest_common.sh@955 -- # kill 71893 00:14:29.475 03:58:04 -- common/autotest_common.sh@960 -- # wait 71893 00:14:29.734 03:58:04 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:29.734 03:58:04 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:29.734 00:14:29.734 real 0m3.238s 00:14:29.734 user 0m3.581s 00:14:29.734 sys 0m0.474s 00:14:29.734 ************************************ 00:14:29.734 END TEST nvmf_vfio_user_fuzz 00:14:29.734 ************************************ 00:14:29.734 03:58:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.734 03:58:04 -- common/autotest_common.sh@10 -- # set +x 00:14:29.734 03:58:04 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:29.734 03:58:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:29.734 03:58:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.734 03:58:04 -- common/autotest_common.sh@10 -- # set +x 00:14:29.734 ************************************ 00:14:29.734 START TEST nvmf_host_management 00:14:29.734 ************************************ 00:14:29.734 03:58:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:29.994 * Looking for test storage... 00:14:29.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:29.994 03:58:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.994 03:58:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.994 03:58:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.994 03:58:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.994 03:58:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.994 03:58:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.994 03:58:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.994 03:58:04 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.994 03:58:04 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.994 03:58:04 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.994 03:58:04 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.994 03:58:04 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.994 03:58:04 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.994 03:58:04 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.994 03:58:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.994 03:58:04 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.994 03:58:04 -- scripts/common.sh@344 -- # : 1 00:14:29.994 03:58:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.994 03:58:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.994 03:58:04 -- scripts/common.sh@364 -- # decimal 1 00:14:29.994 03:58:04 -- scripts/common.sh@352 -- # local d=1 00:14:29.994 03:58:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.994 03:58:04 -- scripts/common.sh@354 -- # echo 1 00:14:29.994 03:58:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.994 03:58:04 -- scripts/common.sh@365 -- # decimal 2 00:14:29.994 03:58:04 -- scripts/common.sh@352 -- # local d=2 00:14:29.994 03:58:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.994 03:58:04 -- scripts/common.sh@354 -- # echo 2 00:14:29.994 03:58:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.994 03:58:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.994 03:58:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.994 03:58:04 -- scripts/common.sh@367 -- # return 0 00:14:29.994 03:58:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.994 03:58:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.994 --rc genhtml_branch_coverage=1 00:14:29.994 --rc genhtml_function_coverage=1 00:14:29.994 --rc genhtml_legend=1 00:14:29.994 --rc geninfo_all_blocks=1 00:14:29.994 --rc geninfo_unexecuted_blocks=1 00:14:29.994 00:14:29.994 ' 00:14:29.994 03:58:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.994 --rc genhtml_branch_coverage=1 00:14:29.994 --rc genhtml_function_coverage=1 00:14:29.994 --rc genhtml_legend=1 00:14:29.994 --rc geninfo_all_blocks=1 00:14:29.994 --rc geninfo_unexecuted_blocks=1 00:14:29.994 00:14:29.994 ' 00:14:29.994 03:58:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.994 --rc genhtml_branch_coverage=1 00:14:29.994 --rc genhtml_function_coverage=1 00:14:29.994 --rc genhtml_legend=1 00:14:29.994 --rc geninfo_all_blocks=1 00:14:29.994 --rc geninfo_unexecuted_blocks=1 00:14:29.994 00:14:29.994 ' 00:14:29.994 03:58:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.994 --rc genhtml_branch_coverage=1 00:14:29.994 --rc genhtml_function_coverage=1 00:14:29.994 --rc genhtml_legend=1 00:14:29.994 --rc geninfo_all_blocks=1 00:14:29.994 --rc geninfo_unexecuted_blocks=1 00:14:29.994 00:14:29.994 ' 00:14:29.994 03:58:04 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.994 03:58:04 -- nvmf/common.sh@7 -- # uname -s 00:14:29.994 03:58:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.994 03:58:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.994 03:58:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.994 03:58:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.994 03:58:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.994 03:58:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.994 03:58:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.994 03:58:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.994 03:58:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.994 03:58:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.994 03:58:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
00:14:29.994 03:58:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:29.994 03:58:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.994 03:58:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.994 03:58:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.994 03:58:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.994 03:58:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.994 03:58:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.994 03:58:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.994 03:58:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.994 03:58:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.994 03:58:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.994 03:58:05 -- paths/export.sh@5 -- # export PATH 00:14:29.994 03:58:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.994 03:58:05 -- nvmf/common.sh@46 -- # : 0 00:14:29.994 03:58:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:29.994 03:58:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:29.994 03:58:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:29.994 03:58:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.994 03:58:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.994 03:58:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:29.994 03:58:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:29.994 03:58:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:29.994 03:58:05 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.994 03:58:05 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.994 03:58:05 -- target/host_management.sh@104 -- # nvmftestinit 00:14:29.994 03:58:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:29.994 03:58:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.994 03:58:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:29.994 03:58:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:29.994 03:58:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:29.994 03:58:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.994 03:58:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.994 03:58:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.994 03:58:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:29.994 03:58:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:29.994 03:58:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:29.994 03:58:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:29.994 03:58:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:29.994 03:58:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:29.994 03:58:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.994 03:58:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.994 03:58:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:29.994 03:58:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:29.995 03:58:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:29.995 03:58:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:29.995 03:58:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:29.995 03:58:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.995 03:58:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:29.995 03:58:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:29.995 03:58:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:29.995 03:58:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:29.995 03:58:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:29.995 03:58:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:29.995 Cannot find device "nvmf_tgt_br" 00:14:29.995 03:58:05 -- nvmf/common.sh@154 -- # true 00:14:29.995 03:58:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.995 Cannot find device "nvmf_tgt_br2" 00:14:29.995 03:58:05 -- nvmf/common.sh@155 -- # true 00:14:29.995 03:58:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:29.995 03:58:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:29.995 Cannot find device "nvmf_tgt_br" 00:14:29.995 03:58:05 -- nvmf/common.sh@157 -- # true 00:14:29.995 03:58:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:29.995 Cannot find device "nvmf_tgt_br2" 00:14:29.995 03:58:05 -- nvmf/common.sh@158 -- # true 00:14:29.995 03:58:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:30.253 03:58:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:30.253 03:58:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:30.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.253 03:58:05 -- nvmf/common.sh@161 -- # true 00:14:30.253 03:58:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.253 03:58:05 -- nvmf/common.sh@162 -- # true 00:14:30.253 03:58:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.253 03:58:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.253 03:58:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.253 03:58:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.253 03:58:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.253 03:58:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.253 03:58:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.253 03:58:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.253 03:58:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.253 03:58:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:30.253 03:58:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:30.253 03:58:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:30.253 03:58:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:30.253 03:58:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.253 03:58:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.253 03:58:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.253 03:58:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:30.253 03:58:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:30.253 03:58:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.253 03:58:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.253 03:58:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.253 03:58:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.253 03:58:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.253 03:58:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:30.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:14:30.253 00:14:30.253 --- 10.0.0.2 ping statistics --- 00:14:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.253 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:30.253 03:58:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:30.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:14:30.253 00:14:30.254 --- 10.0.0.3 ping statistics --- 00:14:30.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.254 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:30.254 03:58:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:30.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:30.254 00:14:30.254 --- 10.0.0.1 ping statistics --- 00:14:30.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.254 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:30.512 03:58:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.512 03:58:05 -- nvmf/common.sh@421 -- # return 0 00:14:30.512 03:58:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.512 03:58:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.512 03:58:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.512 03:58:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.512 03:58:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.512 03:58:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.512 03:58:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.512 03:58:05 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:30.512 03:58:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.512 03:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.512 03:58:05 -- common/autotest_common.sh@10 -- # set +x 00:14:30.512 ************************************ 00:14:30.512 START TEST nvmf_host_management 00:14:30.512 ************************************ 00:14:30.512 03:58:05 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:30.512 03:58:05 -- target/host_management.sh@69 -- # starttarget 00:14:30.512 03:58:05 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:30.512 03:58:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.512 03:58:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.512 03:58:05 -- common/autotest_common.sh@10 -- # set +x 00:14:30.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.512 03:58:05 -- nvmf/common.sh@469 -- # nvmfpid=72132 00:14:30.512 03:58:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:30.512 03:58:05 -- nvmf/common.sh@470 -- # waitforlisten 72132 00:14:30.512 03:58:05 -- common/autotest_common.sh@829 -- # '[' -z 72132 ']' 00:14:30.512 03:58:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.512 03:58:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.512 03:58:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.512 03:58:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.512 03:58:05 -- common/autotest_common.sh@10 -- # set +x 00:14:30.512 [2024-11-08 03:58:05.457997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:30.512 [2024-11-08 03:58:05.458103] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.512 [2024-11-08 03:58:05.599863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.770 [2024-11-08 03:58:05.724162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.770 [2024-11-08 03:58:05.724607] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:30.770 [2024-11-08 03:58:05.724738] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.770 [2024-11-08 03:58:05.724874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.770 [2024-11-08 03:58:05.725086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.770 [2024-11-08 03:58:05.726606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.770 [2024-11-08 03:58:05.726748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:30.770 [2024-11-08 03:58:05.726753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.336 03:58:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.336 03:58:06 -- common/autotest_common.sh@862 -- # return 0 00:14:31.336 03:58:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.336 03:58:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.336 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 03:58:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.594 03:58:06 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.594 03:58:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.594 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 [2024-11-08 03:58:06.478573] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.594 03:58:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.594 03:58:06 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:31.594 03:58:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.594 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 03:58:06 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:31.594 03:58:06 -- target/host_management.sh@23 -- # cat 00:14:31.594 03:58:06 -- target/host_management.sh@30 -- # rpc_cmd 00:14:31.594 03:58:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.594 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 Malloc0 00:14:31.594 [2024-11-08 03:58:06.573891] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.594 03:58:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.594 03:58:06 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:31.594 03:58:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.594 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
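nvmftestinit above first tears down any stale setup (the "Cannot find device"/"Cannot open network namespace" lines are the expected first-run cleanup failures) and then builds a bridged veth topology between the host and a target netns. The commands that matter, condensed from the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the error handling are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # joins both veth pairs
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> target sanity check

The target then runs inside nvmf_tgt_ns_spdk (via NVMF_TARGET_NS_CMD) and listens on 10.0.0.2:4420, while the initiator side stays in the root namespace on 10.0.0.1.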
00:14:31.594 03:58:06 -- target/host_management.sh@73 -- # perfpid=72211 00:14:31.594 03:58:06 -- target/host_management.sh@74 -- # waitforlisten 72211 /var/tmp/bdevperf.sock 00:14:31.594 03:58:06 -- common/autotest_common.sh@829 -- # '[' -z 72211 ']' 00:14:31.594 03:58:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.594 03:58:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.594 03:58:06 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:31.594 03:58:06 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:31.594 03:58:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.594 03:58:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.594 03:58:06 -- nvmf/common.sh@520 -- # config=() 00:14:31.594 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:31.594 03:58:06 -- nvmf/common.sh@520 -- # local subsystem config 00:14:31.594 03:58:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:31.594 03:58:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:31.594 { 00:14:31.594 "params": { 00:14:31.594 "name": "Nvme$subsystem", 00:14:31.594 "trtype": "$TEST_TRANSPORT", 00:14:31.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:31.594 "adrfam": "ipv4", 00:14:31.594 "trsvcid": "$NVMF_PORT", 00:14:31.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:31.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:31.594 "hdgst": ${hdgst:-false}, 00:14:31.594 "ddgst": ${ddgst:-false} 00:14:31.594 }, 00:14:31.594 "method": "bdev_nvme_attach_controller" 00:14:31.594 } 00:14:31.594 EOF 00:14:31.594 )") 00:14:31.594 03:58:06 -- nvmf/common.sh@542 -- # cat 00:14:31.595 03:58:06 -- nvmf/common.sh@544 -- # jq . 00:14:31.595 03:58:06 -- nvmf/common.sh@545 -- # IFS=, 00:14:31.595 03:58:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:31.595 "params": { 00:14:31.595 "name": "Nvme0", 00:14:31.595 "trtype": "tcp", 00:14:31.595 "traddr": "10.0.0.2", 00:14:31.595 "adrfam": "ipv4", 00:14:31.595 "trsvcid": "4420", 00:14:31.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:31.595 "hdgst": false, 00:14:31.595 "ddgst": false 00:14:31.595 }, 00:14:31.595 "method": "bdev_nvme_attach_controller" 00:14:31.595 }' 00:14:31.595 [2024-11-08 03:58:06.678727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:31.595 [2024-11-08 03:58:06.678832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72211 ] 00:14:31.853 [2024-11-08 03:58:06.817439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.111 [2024-11-08 03:58:06.972950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.111 Running I/O for 10 seconds... 
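bdevperf receives its attach configuration as JSON on an anonymous file descriptor: the --json /dev/fd/63 above is gen_nvmf_target_json emitted through process substitution, producing the bdev_nvme_attach_controller parameters printed at the end of the trace. Once bdevperf is up, host_management.sh confirms that I/O is actually flowing with the waitforio loop traced next; a sketch (the retry count and sleep interval are illustrative):

waitforio() {
    local rpc_sock=$1 bdev=$2 i count
    for (( i = 10; i != 0; i-- )); do
        count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        # 100 completed reads is the bar used below (1757 observed here)
        [ "$count" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1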
00:14:32.679 03:58:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.679 03:58:07 -- common/autotest_common.sh@862 -- # return 0 00:14:32.679 03:58:07 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:32.679 03:58:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.679 03:58:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.679 03:58:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.679 03:58:07 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.679 03:58:07 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:32.679 03:58:07 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:32.679 03:58:07 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:32.679 03:58:07 -- target/host_management.sh@52 -- # local ret=1 00:14:32.679 03:58:07 -- target/host_management.sh@53 -- # local i 00:14:32.679 03:58:07 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:32.679 03:58:07 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:32.679 03:58:07 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:32.679 03:58:07 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:32.679 03:58:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.679 03:58:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.679 03:58:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.679 03:58:07 -- target/host_management.sh@55 -- # read_io_count=1757 00:14:32.679 03:58:07 -- target/host_management.sh@58 -- # '[' 1757 -ge 100 ']' 00:14:32.679 03:58:07 -- target/host_management.sh@59 -- # ret=0 00:14:32.679 03:58:07 -- target/host_management.sh@60 -- # break 00:14:32.679 03:58:07 -- target/host_management.sh@64 -- # return 0 00:14:32.679 03:58:07 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:32.679 03:58:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.679 03:58:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.679 [2024-11-08 03:58:07.775846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the state(5) to be set 00:14:32.679 [2024-11-08 03:58:07.775986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71910 is same with the 
state(5) to be set 00:14:32.679 [the identical nvmf_tcp_qpair_set_recv_state notice for tqpair=0x1c71910 repeats several dozen more times; duplicates elided] 00:14:32.679 [2024-11-08 03:58:07.776467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.679 [2024-11-08 03:58:07.776509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.679 [several dozen further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs follow, one per outstanding I/O on the deleted submission queue, every one completed with the same ABORTED - SQ DELETION status; all but the first pair above and the final pair below are elided]
[2024-11-08 03:58:07.777880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.681 [2024-11-08 03:58:07.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.681 [2024-11-08 03:58:07.777900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9d400 is same with the state(5) to be set 00:14:32.681 [2024-11-08 03:58:07.777985] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf9d400 was disconnected and freed. reset controller. 00:14:32.681 [2024-11-08 03:58:07.779147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:32.681 03:58:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.681 03:58:07 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:32.681 03:58:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:32.681 task offset: 110336 on job bdev=Nvme0n1 fails
00:14:32.681
00:14:32.681 Latency(us)
[2024-11-08T03:58:07.792Z] Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average       min        max
[2024-11-08T03:58:07.792Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
[2024-11-08T03:58:07.792Z] Job: Nvme0n1 ended in about 0.59 seconds with error
00:14:32.681 Verification LBA range: start 0x0 length 0x400
00:14:32.681 Nvme0n1            :       0.59    3223.26   201.45   108.12    0.00   18873.84   3991.74   24188.74
[2024-11-08T03:58:07.792Z] ===================================================================================================================
[2024-11-08T03:58:07.792Z] Total              :               3223.26   201.45   108.12    0.00   18873.84   3991.74   24188.74
00:14:32.681 03:58:07 -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 [2024-11-08 03:58:07.781166] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:32.681 [2024-11-08 03:58:07.781197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9dc0 (9): Bad file descriptor 00:14:32.940 [2024-11-08 03:58:07.785878] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
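For context on the read_io_count=1757 check earlier in the trace: the waitforio helper polls bdevperf over its RPC socket until the job shows read progress. A minimal reconstruction of that loop from the xtrace; the sleep pacing is an assumption, and the real helper lives in test/nvmf/target/host_management.sh.

    waitforio() {
        # $1 = RPC socket, $2 = bdev name; succeed once >=100 reads completed
        local sock=$1 bdev=$2 i ret=1
        for (( i = 10; i != 0; i-- )); do
            local reads
            reads=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                        bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$reads" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # pacing assumed; the trace only shows the counter checks
        done
        return $ret
    }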
00:14:32.940 03:58:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.940 03:58:07 -- target/host_management.sh@87 -- # sleep 1 00:14:33.873 03:58:08 -- target/host_management.sh@91 -- # kill -9 72211 00:14:33.873 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72211) - No such process 00:14:33.873 03:58:08 -- target/host_management.sh@91 -- # true 00:14:33.873 03:58:08 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:33.873 03:58:08 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:33.873 03:58:08 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:33.873 03:58:08 -- nvmf/common.sh@520 -- # config=() 00:14:33.873 03:58:08 -- nvmf/common.sh@520 -- # local subsystem config 00:14:33.873 03:58:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:33.873 03:58:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:33.873 { 00:14:33.873 "params": { 00:14:33.873 "name": "Nvme$subsystem", 00:14:33.873 "trtype": "$TEST_TRANSPORT", 00:14:33.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.873 "adrfam": "ipv4", 00:14:33.873 "trsvcid": "$NVMF_PORT", 00:14:33.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.873 "hdgst": ${hdgst:-false}, 00:14:33.873 "ddgst": ${ddgst:-false} 00:14:33.873 }, 00:14:33.873 "method": "bdev_nvme_attach_controller" 00:14:33.873 } 00:14:33.873 EOF 00:14:33.873 )") 00:14:33.873 03:58:08 -- nvmf/common.sh@542 -- # cat 00:14:33.873 03:58:08 -- nvmf/common.sh@544 -- # jq . 00:14:33.873 03:58:08 -- nvmf/common.sh@545 -- # IFS=, 00:14:33.873 03:58:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:33.873 "params": { 00:14:33.873 "name": "Nvme0", 00:14:33.873 "trtype": "tcp", 00:14:33.873 "traddr": "10.0.0.2", 00:14:33.873 "adrfam": "ipv4", 00:14:33.873 "trsvcid": "4420", 00:14:33.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:33.873 "hdgst": false, 00:14:33.873 "ddgst": false 00:14:33.873 }, 00:14:33.873 "method": "bdev_nvme_attach_controller" 00:14:33.873 }' 00:14:33.873 [2024-11-08 03:58:08.853332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.873 [2024-11-08 03:58:08.853529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72260 ] 00:14:34.131 [2024-11-08 03:58:08.995325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.131 [2024-11-08 03:58:09.144234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.390 Running I/O for 1 seconds... 
00:14:35.323
00:14:35.323 Latency(us)
[2024-11-08T03:58:10.434Z] Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average       min        max
[2024-11-08T03:58:10.434Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:35.323 Verification LBA range: start 0x0 length 0x400
00:14:35.323 Nvme0n1            :       1.01    2934.26   183.39     0.00    0.00   21413.99   1437.32   26571.87
[2024-11-08T03:58:10.434Z] ===================================================================================================================
[2024-11-08T03:58:10.434Z] Total              :               2934.26   183.39     0.00    0.00   21413.99   1437.32   26571.87
00:14:35.889 03:58:10 -- target/host_management.sh@101 -- # stoptarget 00:14:35.889 03:58:10 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:35.889 03:58:10 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:35.889 03:58:10 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:35.889 03:58:10 -- target/host_management.sh@40 -- # nvmftestfini 00:14:35.889 03:58:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:35.889 03:58:10 -- nvmf/common.sh@116 -- # sync 00:14:35.889 03:58:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:35.889 03:58:10 -- nvmf/common.sh@119 -- # set +e 00:14:35.889 03:58:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:35.889 03:58:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:35.889 rmmod nvme_tcp 00:14:35.889 rmmod nvme_fabrics 00:14:35.889 rmmod nvme_keyring 00:14:35.889 03:58:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:35.889 03:58:10 -- nvmf/common.sh@123 -- # set -e 00:14:35.889 03:58:10 -- nvmf/common.sh@124 -- # return 0 00:14:35.889 03:58:10 -- nvmf/common.sh@477 -- # '[' -n 72132 ']' 00:14:35.889 03:58:10 -- nvmf/common.sh@478 -- # killprocess 72132 00:14:35.889 03:58:10 -- common/autotest_common.sh@936 -- # '[' -z 72132 ']' 00:14:35.889 03:58:10 -- common/autotest_common.sh@940 -- # kill -0 72132 00:14:35.889 03:58:10 -- common/autotest_common.sh@941 -- # uname 00:14:35.889 03:58:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.889 03:58:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72132 00:14:35.889 03:58:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:35.889 03:58:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:35.889 killing process with pid 72132 00:14:35.889 03:58:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72132' 00:14:35.889 03:58:10 -- common/autotest_common.sh@955 -- # kill 72132 00:14:35.889 03:58:10 -- common/autotest_common.sh@960 -- # wait 72132 00:14:36.147 [2024-11-08 03:58:11.239637] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
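The killprocess trace just above (common/autotest_common.sh@936 through @960) follows a guarded-kill pattern. A sketch reconstructed from the xtrace; control flow and return codes are simplified assumptions, not the verbatim helper.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # @936: refuse an empty pid
        kill -0 "$pid" || return 0       # @940: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then  # @941
            # @942/@946: never kill a sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"   # @954
        kill "$pid"                            # @955
        wait "$pid" || true                    # @960: reap if it is our child
    }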
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:36.406 00:14:36.406 real 0m5.919s 00:14:36.406 user 0m24.589s 00:14:36.406 sys 0m1.407s 00:14:36.406 03:58:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.406 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.406 ************************************ 00:14:36.406 END TEST nvmf_host_management 00:14:36.406 ************************************ 00:14:36.406 03:58:11 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:36.406 ************************************ 00:14:36.406 END TEST nvmf_host_management 00:14:36.406 ************************************ 00:14:36.406 00:14:36.406 real 0m6.546s 00:14:36.406 user 0m24.789s 00:14:36.406 sys 0m1.687s 00:14:36.406 03:58:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.406 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.406 03:58:11 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:36.406 03:58:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:36.406 03:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.406 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.406 ************************************ 00:14:36.406 START TEST nvmf_lvol 00:14:36.406 ************************************ 00:14:36.406 03:58:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:36.406 * Looking for test storage... 00:14:36.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:36.406 03:58:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:36.406 03:58:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:36.406 03:58:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:36.681 03:58:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:36.681 03:58:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:36.681 03:58:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:36.681 03:58:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:36.681 03:58:11 -- scripts/common.sh@335 -- # IFS=.-: 00:14:36.681 03:58:11 -- scripts/common.sh@335 -- # read -ra ver1 00:14:36.681 03:58:11 -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.681 03:58:11 -- scripts/common.sh@336 -- # read -ra ver2 00:14:36.681 03:58:11 -- scripts/common.sh@337 -- # local 'op=<' 00:14:36.681 03:58:11 -- scripts/common.sh@339 -- # ver1_l=2 00:14:36.681 03:58:11 -- scripts/common.sh@340 -- # ver2_l=1 00:14:36.681 03:58:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:36.681 03:58:11 -- scripts/common.sh@343 -- # case "$op" in 00:14:36.681 03:58:11 -- scripts/common.sh@344 -- # : 1 00:14:36.681 03:58:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:36.681 03:58:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.681 03:58:11 -- scripts/common.sh@364 -- # decimal 1 00:14:36.681 03:58:11 -- scripts/common.sh@352 -- # local d=1 00:14:36.681 03:58:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.681 03:58:11 -- scripts/common.sh@354 -- # echo 1 00:14:36.681 03:58:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:36.681 03:58:11 -- scripts/common.sh@365 -- # decimal 2 00:14:36.681 03:58:11 -- scripts/common.sh@352 -- # local d=2 00:14:36.681 03:58:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.681 03:58:11 -- scripts/common.sh@354 -- # echo 2 00:14:36.681 03:58:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:36.681 03:58:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:36.681 03:58:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:36.681 03:58:11 -- scripts/common.sh@367 -- # return 0 00:14:36.681 03:58:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.681 03:58:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.681 --rc genhtml_branch_coverage=1 00:14:36.681 --rc genhtml_function_coverage=1 00:14:36.681 --rc genhtml_legend=1 00:14:36.681 --rc geninfo_all_blocks=1 00:14:36.681 --rc geninfo_unexecuted_blocks=1 00:14:36.681 00:14:36.681 ' 00:14:36.681 03:58:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.681 --rc genhtml_branch_coverage=1 00:14:36.681 --rc genhtml_function_coverage=1 00:14:36.681 --rc genhtml_legend=1 00:14:36.681 --rc geninfo_all_blocks=1 00:14:36.681 --rc geninfo_unexecuted_blocks=1 00:14:36.681 00:14:36.681 ' 00:14:36.681 03:58:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.681 --rc genhtml_branch_coverage=1 00:14:36.681 --rc genhtml_function_coverage=1 00:14:36.681 --rc genhtml_legend=1 00:14:36.681 --rc geninfo_all_blocks=1 00:14:36.682 --rc geninfo_unexecuted_blocks=1 00:14:36.682 00:14:36.682 ' 00:14:36.682 03:58:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:36.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.682 --rc genhtml_branch_coverage=1 00:14:36.682 --rc genhtml_function_coverage=1 00:14:36.682 --rc genhtml_legend=1 00:14:36.682 --rc geninfo_all_blocks=1 00:14:36.682 --rc geninfo_unexecuted_blocks=1 00:14:36.682 00:14:36.682 ' 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.682 03:58:11 -- nvmf/common.sh@7 -- # uname -s 00:14:36.682 03:58:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.682 03:58:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.682 03:58:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.682 03:58:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.682 03:58:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.682 03:58:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.682 03:58:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.682 03:58:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.682 03:58:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.682 03:58:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:36.682 
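For readers decoding the cmp_versions xtrace above (scripts/common.sh@332 through @367): it performs a component-wise version comparison after splitting on dots, dashes, and colons. A simplified reconstruction, with the decimal() sanitizer seen in the trace folded into :-0 defaults:

    lt() { cmp_versions "$1" '<' "$2"; }       # e.g. 'lt 1.15 2' as traced above

    cmp_versions() {
        local IFS=.-:                          # split version strings on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $2 == '>' ]]; return; }   # first differing part decides
            (( a < b )) && { [[ $2 == '<' ]]; return; }
        done
        return 1                               # equal versions satisfy neither < nor >
    }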
03:58:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:36.682 03:58:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.682 03:58:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.682 03:58:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.682 03:58:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.682 03:58:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.682 03:58:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.682 03:58:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.682 03:58:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.682 03:58:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.682 03:58:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.682 03:58:11 -- paths/export.sh@5 -- # export PATH 00:14:36.682 03:58:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.682 03:58:11 -- nvmf/common.sh@46 -- # : 0 00:14:36.682 03:58:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:36.682 03:58:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:36.682 03:58:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:36.682 03:58:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.682 03:58:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.682 03:58:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
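The NVME_HOSTNQN and NVME_HOSTID generated above make up the initiator identity that the NVME_HOST array carries. A hedged usage sketch; this particular connect is not issued in this part of the log, and the flags are standard nvme-cli options:

    # Connect to the test subsystem using the generated host identity
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode0 \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"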
00:14:36.682 03:58:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:36.682 03:58:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.682 03:58:11 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:36.682 03:58:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:36.682 03:58:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.682 03:58:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:36.682 03:58:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:36.682 03:58:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:36.682 03:58:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.682 03:58:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.682 03:58:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.682 03:58:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:36.682 03:58:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:36.682 03:58:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.682 03:58:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.682 03:58:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.682 03:58:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:36.682 03:58:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.682 03:58:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.682 03:58:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.682 03:58:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.682 03:58:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.682 03:58:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.682 03:58:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.682 03:58:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.682 03:58:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:36.682 03:58:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:36.682 Cannot find device "nvmf_tgt_br" 00:14:36.682 03:58:11 -- nvmf/common.sh@154 -- # true 00:14:36.682 03:58:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.682 Cannot find device "nvmf_tgt_br2" 00:14:36.683 03:58:11 -- nvmf/common.sh@155 -- # true 00:14:36.683 03:58:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:36.683 03:58:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:36.683 Cannot find device "nvmf_tgt_br" 00:14:36.683 03:58:11 -- nvmf/common.sh@157 -- # true 00:14:36.683 03:58:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:36.683 Cannot find device "nvmf_tgt_br2" 00:14:36.683 03:58:11 -- nvmf/common.sh@158 -- # true 00:14:36.683 03:58:11 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:36.683 03:58:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:36.683 03:58:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.683 03:58:11 -- nvmf/common.sh@161 -- # true 00:14:36.683 03:58:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.683 03:58:11 -- nvmf/common.sh@162 -- # true 00:14:36.683 03:58:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.962 03:58:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.962 03:58:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.962 03:58:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.962 03:58:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.962 03:58:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.962 03:58:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.962 03:58:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.962 03:58:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.962 03:58:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:36.962 03:58:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:36.962 03:58:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:36.962 03:58:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:36.962 03:58:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.962 03:58:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.962 03:58:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.962 03:58:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:36.962 03:58:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:36.962 03:58:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.962 03:58:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.962 03:58:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.962 03:58:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.962 03:58:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.962 03:58:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:36.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:36.962 00:14:36.962 --- 10.0.0.2 ping statistics --- 00:14:36.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.962 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:36.962 03:58:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:36.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:36.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:36.962 00:14:36.962 --- 10.0.0.3 ping statistics --- 00:14:36.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.962 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:36.962 03:58:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:36.962 00:14:36.962 --- 10.0.0.1 ping statistics --- 00:14:36.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.962 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:36.962 03:58:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.962 03:58:11 -- nvmf/common.sh@421 -- # return 0 00:14:36.962 03:58:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:36.962 03:58:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.962 03:58:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:36.962 03:58:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:36.962 03:58:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.962 03:58:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:36.962 03:58:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:36.962 03:58:11 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:36.962 03:58:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:36.962 03:58:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.962 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.962 03:58:11 -- nvmf/common.sh@469 -- # nvmfpid=72501 00:14:36.962 03:58:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:36.962 03:58:11 -- nvmf/common.sh@470 -- # waitforlisten 72501 00:14:36.962 03:58:11 -- common/autotest_common.sh@829 -- # '[' -z 72501 ']' 00:14:36.962 03:58:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.962 03:58:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.962 03:58:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.962 03:58:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.962 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:14:36.962 [2024-11-08 03:58:12.024852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:36.962 [2024-11-08 03:58:12.024965] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.221 [2024-11-08 03:58:12.160847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.221 [2024-11-08 03:58:12.299760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.221 [2024-11-08 03:58:12.300100] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.221 [2024-11-08 03:58:12.300196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
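[annotation] Once all three ping checks pass and nvme-tcp is loaded, nvmfappstart launches the target inside the namespace (note NVMF_APP being prefixed with the NVMF_TARGET_NS_CMD array above) and blocks until the app answers on /var/tmp/spdk.sock; the pid lands in nvmfpid (72501 here). A minimal polling sketch of that start-and-wait, using rpc_get_methods as the liveness probe (the real waitforlisten helper also caps its retries):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        sleep 0.5
    done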
00:14:37.221 [2024-11-08 03:58:12.300290] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.221 [2024-11-08 03:58:12.300579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.221 [2024-11-08 03:58:12.300722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.221 [2024-11-08 03:58:12.300727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.157 03:58:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.157 03:58:12 -- common/autotest_common.sh@862 -- # return 0 00:14:38.157 03:58:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:38.157 03:58:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.157 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:14:38.157 03:58:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.157 03:58:12 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:38.157 [2024-11-08 03:58:13.259161] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.415 03:58:13 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:38.673 03:58:13 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:38.674 03:58:13 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:38.932 03:58:13 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:38.932 03:58:13 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:39.190 03:58:14 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:39.449 03:58:14 -- target/nvmf_lvol.sh@29 -- # lvs=1af967fc-12ff-4fa8-8425-6e6c2d74e3f3 00:14:39.449 03:58:14 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1af967fc-12ff-4fa8-8425-6e6c2d74e3f3 lvol 20 00:14:39.708 03:58:14 -- target/nvmf_lvol.sh@32 -- # lvol=9b1a7386-5620-4abd-9235-fa2f077ed523 00:14:39.708 03:58:14 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:39.966 03:58:15 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b1a7386-5620-4abd-9235-fa2f077ed523 00:14:40.225 03:58:15 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:40.483 [2024-11-08 03:58:15.499049] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.483 03:58:15 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.742 03:58:15 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:40.742 03:58:15 -- target/nvmf_lvol.sh@42 -- # perf_pid=72649 00:14:40.742 03:58:15 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:41.676 03:58:16 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 9b1a7386-5620-4abd-9235-fa2f077ed523 MY_SNAPSHOT 
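[annotation] With the target up, the lvol test provisions its stack bottom-up: two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol, an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420, and finally a 10 s randwrite workload started in the background before the snapshot command that this excerpt ends on. The same chain as a sketch ($rpc is shorthand for scripts/rpc.py; the UUID captures mirror how the script stores $lvs and $lvol):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # Malloc0
    $rpc bdev_malloc_create 64 512                       # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # returns the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -w randwrite -t 10 -c 0x18 & perf_pid=$!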
00:14:41.935 03:58:17 -- target/nvmf_lvol.sh@47 -- # snapshot=2c4ab335-564b-4ee0-8021-c2127602adfa 00:14:41.935 03:58:17 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 9b1a7386-5620-4abd-9235-fa2f077ed523 30 00:14:42.193 03:58:17 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2c4ab335-564b-4ee0-8021-c2127602adfa MY_CLONE 00:14:42.451 03:58:17 -- target/nvmf_lvol.sh@49 -- # clone=82bf0a09-a0b5-4735-85fc-dde679756c51 00:14:42.451 03:58:17 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 82bf0a09-a0b5-4735-85fc-dde679756c51 00:14:43.018 03:58:18 -- target/nvmf_lvol.sh@53 -- # wait 72649 00:14:51.141 Initializing NVMe Controllers 00:14:51.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:51.141 Controller IO queue size 128, less than required. 00:14:51.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:51.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:51.141 Initialization complete. Launching workers. 00:14:51.141 ======================================================== 00:14:51.141 Latency(us) 00:14:51.141 Device Information : IOPS MiB/s Average min max 00:14:51.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11587.29 45.26 11046.75 1781.08 47861.53 00:14:51.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11607.99 45.34 11030.52 2586.33 96526.70 00:14:51.141 ======================================================== 00:14:51.141 Total : 23195.27 90.61 11038.63 1781.08 96526.70 00:14:51.141 00:14:51.141 03:58:26 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:51.400 03:58:26 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9b1a7386-5620-4abd-9235-fa2f077ed523 00:14:51.658 03:58:26 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1af967fc-12ff-4fa8-8425-6e6c2d74e3f3 00:14:51.917 03:58:26 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:51.917 03:58:26 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:51.917 03:58:26 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:51.917 03:58:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.917 03:58:26 -- nvmf/common.sh@116 -- # sync 00:14:51.917 03:58:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.917 03:58:26 -- nvmf/common.sh@119 -- # set +e 00:14:51.917 03:58:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.917 03:58:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.917 rmmod nvme_tcp 00:14:51.917 rmmod nvme_fabrics 00:14:51.917 rmmod nvme_keyring 00:14:51.917 03:58:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.917 03:58:26 -- nvmf/common.sh@123 -- # set -e 00:14:51.917 03:58:26 -- nvmf/common.sh@124 -- # return 0 00:14:51.917 03:58:26 -- nvmf/common.sh@477 -- # '[' -n 72501 ']' 00:14:51.917 03:58:26 -- nvmf/common.sh@478 -- # killprocess 72501 00:14:51.917 03:58:26 -- common/autotest_common.sh@936 -- # '[' -z 72501 ']' 00:14:51.917 03:58:26 -- common/autotest_common.sh@940 -- # kill -0 72501 00:14:51.917 03:58:26 -- common/autotest_common.sh@941 -- # uname 00:14:51.917 
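[annotation] Everything from the snapshot onward runs while spdk_nvme_perf keeps 128 queued randwrites against the exported lvol, which is the point of the test: snapshot, grow the live volume 20 -> 30 MiB, clone the snapshot, inflate the clone so it owns its own clusters, then wait for the 10 s workload to drain and report. The two "from core 3"/"from core 4" rows in the results match the -c 0x18 mask, i.e. cores 3 and 4. As a sketch, with $rpc, $lvol and $perf_pid as introduced above:

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30          # 20 -> 30 MiB with I/O in flight
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"           # copy-up: clone stops sharing clusters
    wait "$perf_pid"                          # collect the 10 s perf results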
03:58:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.917 03:58:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72501 00:14:51.917 killing process with pid 72501 00:14:51.917 03:58:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.917 03:58:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.917 03:58:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72501' 00:14:51.917 03:58:26 -- common/autotest_common.sh@955 -- # kill 72501 00:14:51.917 03:58:26 -- common/autotest_common.sh@960 -- # wait 72501 00:14:52.485 03:58:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.485 03:58:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.485 03:58:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.485 03:58:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.485 03:58:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.485 03:58:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.485 03:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.485 03:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.485 03:58:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:52.485 00:14:52.485 real 0m15.923s 00:14:52.485 user 1m5.469s 00:14:52.485 sys 0m4.200s 00:14:52.485 03:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.485 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:52.485 ************************************ 00:14:52.485 END TEST nvmf_lvol 00:14:52.485 ************************************ 00:14:52.485 03:58:27 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.485 03:58:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.485 03:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.485 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:52.485 ************************************ 00:14:52.485 START TEST nvmf_lvs_grow 00:14:52.485 ************************************ 00:14:52.485 03:58:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.485 * Looking for test storage... 
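[annotation] The shutdown above follows autotest_common.sh's killprocess idiom: confirm the pid is still alive, inspect what it actually is with ps before signalling it, then kill and wait so the exit status is collected. A simplified sketch of that idiom (the real helper has extra handling when the process turns out to be a sudo wrapper, and reports via the "killing process with pid" message seen in the log):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it's gone
        local name=
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        fi
        [ "$name" = sudo ] && return 1                # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap; tolerate a nonzero exit
    }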
00:14:52.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.485 03:58:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.485 03:58:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.485 03:58:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.485 03:58:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.485 03:58:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.485 03:58:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.485 03:58:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.485 03:58:27 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.485 03:58:27 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.485 03:58:27 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.485 03:58:27 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.485 03:58:27 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.485 03:58:27 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.485 03:58:27 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.485 03:58:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.485 03:58:27 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.485 03:58:27 -- scripts/common.sh@344 -- # : 1 00:14:52.485 03:58:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.485 03:58:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.485 03:58:27 -- scripts/common.sh@364 -- # decimal 1 00:14:52.485 03:58:27 -- scripts/common.sh@352 -- # local d=1 00:14:52.485 03:58:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.485 03:58:27 -- scripts/common.sh@354 -- # echo 1 00:14:52.485 03:58:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.485 03:58:27 -- scripts/common.sh@365 -- # decimal 2 00:14:52.485 03:58:27 -- scripts/common.sh@352 -- # local d=2 00:14:52.485 03:58:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.485 03:58:27 -- scripts/common.sh@354 -- # echo 2 00:14:52.485 03:58:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.485 03:58:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.485 03:58:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.485 03:58:27 -- scripts/common.sh@367 -- # return 0 00:14:52.485 03:58:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.486 03:58:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 03:58:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 03:58:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 
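[annotation] The trace above is scripts/common.sh deciding which lcov option set to use: lt 1.15 2 asks whether the installed tool's version is less than 2 by splitting both version strings on ".-:" and comparing field by field, padding the shorter one with zeros. A reduced sketch of the same comparison, covering only the "<" case and assuming plain numeric dot-separated fields (the real cmp_versions handles all operators):

    version_lt() {
        local IFS=.
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: use the 1.x option set"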
03:58:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 03:58:27 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.486 03:58:27 -- nvmf/common.sh@7 -- # uname -s 00:14:52.486 03:58:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.486 03:58:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.486 03:58:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.486 03:58:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.486 03:58:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.486 03:58:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.486 03:58:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.486 03:58:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.486 03:58:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.486 03:58:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:52.486 03:58:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:14:52.486 03:58:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.486 03:58:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.486 03:58:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.486 03:58:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.486 03:58:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.486 03:58:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.486 03:58:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.486 03:58:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 03:58:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 03:58:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 03:58:27 -- paths/export.sh@5 -- # export PATH 00:14:52.486 03:58:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 03:58:27 -- nvmf/common.sh@46 -- # : 0 00:14:52.486 03:58:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.486 03:58:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.486 03:58:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.486 03:58:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.486 03:58:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.486 03:58:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.486 03:58:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.486 03:58:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.486 03:58:27 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.486 03:58:27 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.486 03:58:27 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:52.486 03:58:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.486 03:58:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.486 03:58:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.486 03:58:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.486 03:58:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.486 03:58:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.486 03:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.486 03:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.486 03:58:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:52.486 03:58:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:52.486 03:58:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.486 03:58:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.486 03:58:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.486 03:58:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:52.486 03:58:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.486 03:58:27 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.486 03:58:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.486 03:58:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.486 03:58:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.486 03:58:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.486 03:58:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.486 03:58:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.486 03:58:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:52.486 03:58:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:52.745 Cannot find device "nvmf_tgt_br" 00:14:52.745 03:58:27 -- nvmf/common.sh@154 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.745 Cannot find device "nvmf_tgt_br2" 00:14:52.745 03:58:27 -- nvmf/common.sh@155 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:52.745 03:58:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:52.745 Cannot find device "nvmf_tgt_br" 00:14:52.745 03:58:27 -- nvmf/common.sh@157 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:52.745 Cannot find device "nvmf_tgt_br2" 00:14:52.745 03:58:27 -- nvmf/common.sh@158 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:52.745 03:58:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:52.745 03:58:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.745 03:58:27 -- nvmf/common.sh@161 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.745 03:58:27 -- nvmf/common.sh@162 -- # true 00:14:52.745 03:58:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.745 03:58:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.745 03:58:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.745 03:58:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.745 03:58:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.745 03:58:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.745 03:58:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.745 03:58:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.745 03:58:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.745 03:58:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:52.745 03:58:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:52.745 03:58:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:52.745 03:58:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:52.745 03:58:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.745 03:58:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:52.745 03:58:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.745 03:58:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:52.745 03:58:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:52.745 03:58:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.745 03:58:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.004 03:58:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.004 03:58:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.004 03:58:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.004 03:58:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:53.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:53.004 00:14:53.004 --- 10.0.0.2 ping statistics --- 00:14:53.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.004 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:53.004 03:58:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:53.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:53.004 00:14:53.004 --- 10.0.0.3 ping statistics --- 00:14:53.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.004 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:53.004 03:58:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:53.004 00:14:53.004 --- 10.0.0.1 ping statistics --- 00:14:53.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.004 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:53.004 03:58:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.004 03:58:27 -- nvmf/common.sh@421 -- # return 0 00:14:53.004 03:58:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.004 03:58:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.004 03:58:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.004 03:58:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.004 03:58:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.004 03:58:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.004 03:58:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.004 03:58:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:53.004 03:58:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.004 03:58:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.004 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.004 03:58:27 -- nvmf/common.sh@469 -- # nvmfpid=73021 00:14:53.004 03:58:27 -- nvmf/common.sh@470 -- # waitforlisten 73021 00:14:53.004 03:58:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:53.004 03:58:27 -- common/autotest_common.sh@829 -- # '[' -z 73021 ']' 00:14:53.004 03:58:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.004 03:58:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
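[annotation] A small but deliberate difference from the nvmf_lvol run earlier: there the target was started with -m 0x7 and the EAL reported three reactors on cores 0-2, while this test passes -m 0x1, so "Total cores available: 1" and a single reactor on core 0 are the expected shape here. The mask is simply a bitmap of CPU cores (0x7 = 0b111 selects cores 0-2; 0x1 selects core 0 alone), and a single core is presumably enough because the lvs_grow flow is driven by RPC calls rather than by target-side I/O parallelism.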
00:14:53.004 03:58:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.004 03:58:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.004 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.004 [2024-11-08 03:58:27.987217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.004 [2024-11-08 03:58:27.987305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.263 [2024-11-08 03:58:28.126863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.263 [2024-11-08 03:58:28.211438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.263 [2024-11-08 03:58:28.211576] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.263 [2024-11-08 03:58:28.211588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.263 [2024-11-08 03:58:28.211595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.263 [2024-11-08 03:58:28.211626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.198 03:58:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.198 03:58:28 -- common/autotest_common.sh@862 -- # return 0 00:14:54.198 03:58:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.198 03:58:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.198 03:58:28 -- common/autotest_common.sh@10 -- # set +x 00:14:54.198 03:58:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.198 03:58:29 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.198 [2024-11-08 03:58:29.278485] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.198 03:58:29 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:54.198 03:58:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:54.198 03:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.198 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:14:54.457 ************************************ 00:14:54.457 START TEST lvs_grow_clean 00:14:54.457 ************************************ 00:14:54.457 03:58:29 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:54.457 03:58:29 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:54.715 03:58:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:14:54.715 03:58:29 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:14:54.715 03:58:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:54.973 03:58:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:54.973 03:58:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:54.973 03:58:30 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 lvol 150 00:14:55.233 03:58:30 -- target/nvmf_lvs_grow.sh@33 -- # lvol=006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 00:14:55.233 03:58:30 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:55.233 03:58:30 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:55.491 [2024-11-08 03:58:30.536075] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:55.491 [2024-11-08 03:58:30.536133] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:55.492 true 00:14:55.492 03:58:30 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:14:55.492 03:58:30 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:55.750 03:58:30 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:55.750 03:58:30 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:56.009 03:58:30 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 00:14:56.267 03:58:31 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:56.267 [2024-11-08 03:58:31.360516] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.526 03:58:31 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.785 03:58:31 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:56.785 03:58:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73189 00:14:56.785 03:58:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.785 03:58:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73189 /var/tmp/bdevperf.sock 00:14:56.785 03:58:31 -- common/autotest_common.sh@829 -- # '[' -z 73189 ']' 00:14:56.785 
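[annotation] The lvs_grow fixture pairs a sparse file with an AIO bdev precisely so that capacity can be changed with truncate, and the logged 4 MiB cluster size makes the assertions round numbers: 200 MiB is 50 clusters, one of which goes to lvstore metadata at these settings, hence the asserted total_data_clusters of 49; once the file is 400 MiB and the store is grown later in the run, the same arithmetic gives 400/4 - 1 = 99. A sketch of the setup, with AIO_FILE standing in for the repo path test/nvmf/target/aio_bdev:

    AIO_FILE=/tmp/aio_bdev_file            # hypothetical path for this sketch
    rm -f "$AIO_FILE"; truncate -s 200M "$AIO_FILE"
    $rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150    # 150 MiB lvol on the store
    truncate -s 400M "$AIO_FILE"                # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev               # ...51200 -> 102400 blocks, as logged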
03:58:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.785 03:58:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.785 03:58:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.785 03:58:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.785 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.785 [2024-11-08 03:58:31.689837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:56.785 [2024-11-08 03:58:31.689961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 00:14:56.785 [2024-11-08 03:58:31.822297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.044 [2024-11-08 03:58:31.899427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.611 03:58:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.611 03:58:32 -- common/autotest_common.sh@862 -- # return 0 00:14:57.611 03:58:32 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:57.870 Nvme0n1 00:14:57.870 03:58:32 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:58.129 [ 00:14:58.129 { 00:14:58.129 "aliases": [ 00:14:58.129 "006fa24c-de4b-4ff0-8b57-0ecdc0ec8985" 00:14:58.129 ], 00:14:58.129 "assigned_rate_limits": { 00:14:58.129 "r_mbytes_per_sec": 0, 00:14:58.129 "rw_ios_per_sec": 0, 00:14:58.129 "rw_mbytes_per_sec": 0, 00:14:58.130 "w_mbytes_per_sec": 0 00:14:58.130 }, 00:14:58.130 "block_size": 4096, 00:14:58.130 "claimed": false, 00:14:58.130 "driver_specific": { 00:14:58.130 "mp_policy": "active_passive", 00:14:58.130 "nvme": [ 00:14:58.130 { 00:14:58.130 "ctrlr_data": { 00:14:58.130 "ana_reporting": false, 00:14:58.130 "cntlid": 1, 00:14:58.130 "firmware_revision": "24.01.1", 00:14:58.130 "model_number": "SPDK bdev Controller", 00:14:58.130 "multi_ctrlr": true, 00:14:58.130 "oacs": { 00:14:58.130 "firmware": 0, 00:14:58.130 "format": 0, 00:14:58.130 "ns_manage": 0, 00:14:58.130 "security": 0 00:14:58.130 }, 00:14:58.130 "serial_number": "SPDK0", 00:14:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.130 "vendor_id": "0x8086" 00:14:58.130 }, 00:14:58.130 "ns_data": { 00:14:58.130 "can_share": true, 00:14:58.130 "id": 1 00:14:58.130 }, 00:14:58.130 "trid": { 00:14:58.130 "adrfam": "IPv4", 00:14:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.130 "traddr": "10.0.0.2", 00:14:58.130 "trsvcid": "4420", 00:14:58.130 "trtype": "TCP" 00:14:58.130 }, 00:14:58.130 "vs": { 00:14:58.130 "nvme_version": "1.3" 00:14:58.130 } 00:14:58.130 } 00:14:58.130 ] 00:14:58.130 }, 00:14:58.130 "name": "Nvme0n1", 00:14:58.130 "num_blocks": 38912, 00:14:58.130 "product_name": "NVMe disk", 00:14:58.130 "supported_io_types": { 00:14:58.130 "abort": true, 00:14:58.130 "compare": true, 00:14:58.130 "compare_and_write": true, 00:14:58.130 "flush": true, 00:14:58.130 "nvme_admin": true, 00:14:58.130 "nvme_io": true, 00:14:58.130 "read": true, 
00:14:58.130 "reset": true, 00:14:58.130 "unmap": true, 00:14:58.130 "write": true, 00:14:58.130 "write_zeroes": true 00:14:58.130 }, 00:14:58.130 "uuid": "006fa24c-de4b-4ff0-8b57-0ecdc0ec8985", 00:14:58.130 "zoned": false 00:14:58.130 } 00:14:58.130 ] 00:14:58.130 03:58:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73231 00:14:58.130 03:58:33 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.130 03:58:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:58.130 Running I/O for 10 seconds... 00:14:59.504 Latency(us) 00:14:59.504 [2024-11-08T03:58:34.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.504 [2024-11-08T03:58:34.615Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.504 Nvme0n1 : 1.00 9760.00 38.12 0.00 0.00 0.00 0.00 0.00 00:14:59.504 [2024-11-08T03:58:34.615Z] =================================================================================================================== 00:14:59.504 [2024-11-08T03:58:34.615Z] Total : 9760.00 38.12 0.00 0.00 0.00 0.00 0.00 00:14:59.504 00:15:00.070 03:58:35 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:00.329 [2024-11-08T03:58:35.440Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.329 Nvme0n1 : 2.00 9818.50 38.35 0.00 0.00 0.00 0.00 0.00 00:15:00.329 [2024-11-08T03:58:35.440Z] =================================================================================================================== 00:15:00.329 [2024-11-08T03:58:35.440Z] Total : 9818.50 38.35 0.00 0.00 0.00 0.00 0.00 00:15:00.329 00:15:00.587 true 00:15:00.587 03:58:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:00.587 03:58:35 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:00.845 03:58:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:00.845 03:58:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:00.845 03:58:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 73231 00:15:01.411 [2024-11-08T03:58:36.522Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.411 Nvme0n1 : 3.00 9660.33 37.74 0.00 0.00 0.00 0.00 0.00 00:15:01.411 [2024-11-08T03:58:36.522Z] =================================================================================================================== 00:15:01.411 [2024-11-08T03:58:36.522Z] Total : 9660.33 37.74 0.00 0.00 0.00 0.00 0.00 00:15:01.411 00:15:02.369 [2024-11-08T03:58:37.480Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.369 Nvme0n1 : 4.00 9734.00 38.02 0.00 0.00 0.00 0.00 0.00 00:15:02.369 [2024-11-08T03:58:37.480Z] =================================================================================================================== 00:15:02.369 [2024-11-08T03:58:37.480Z] Total : 9734.00 38.02 0.00 0.00 0.00 0.00 0.00 00:15:02.369 00:15:03.305 [2024-11-08T03:58:38.416Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.305 Nvme0n1 : 5.00 9698.20 37.88 0.00 0.00 0.00 0.00 0.00 00:15:03.305 [2024-11-08T03:58:38.416Z] =================================================================================================================== 00:15:03.305 [2024-11-08T03:58:38.416Z] Total : 9698.20 
37.88 0.00 0.00 0.00 0.00 0.00 00:15:03.305 00:15:04.241 [2024-11-08T03:58:39.352Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.241 Nvme0n1 : 6.00 9723.17 37.98 0.00 0.00 0.00 0.00 0.00 00:15:04.241 [2024-11-08T03:58:39.352Z] =================================================================================================================== 00:15:04.241 [2024-11-08T03:58:39.352Z] Total : 9723.17 37.98 0.00 0.00 0.00 0.00 0.00 00:15:04.241 00:15:05.179 [2024-11-08T03:58:40.290Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.179 Nvme0n1 : 7.00 9721.14 37.97 0.00 0.00 0.00 0.00 0.00 00:15:05.179 [2024-11-08T03:58:40.290Z] =================================================================================================================== 00:15:05.179 [2024-11-08T03:58:40.290Z] Total : 9721.14 37.97 0.00 0.00 0.00 0.00 0.00 00:15:05.179 00:15:06.554 [2024-11-08T03:58:41.665Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.554 Nvme0n1 : 8.00 9715.50 37.95 0.00 0.00 0.00 0.00 0.00 00:15:06.554 [2024-11-08T03:58:41.665Z] =================================================================================================================== 00:15:06.554 [2024-11-08T03:58:41.665Z] Total : 9715.50 37.95 0.00 0.00 0.00 0.00 0.00 00:15:06.554 00:15:07.121 [2024-11-08T03:58:42.232Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.121 Nvme0n1 : 9.00 9710.33 37.93 0.00 0.00 0.00 0.00 0.00 00:15:07.121 [2024-11-08T03:58:42.232Z] =================================================================================================================== 00:15:07.121 [2024-11-08T03:58:42.232Z] Total : 9710.33 37.93 0.00 0.00 0.00 0.00 0.00 00:15:07.121 00:15:08.496 [2024-11-08T03:58:43.607Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.496 Nvme0n1 : 10.00 9707.00 37.92 0.00 0.00 0.00 0.00 0.00 00:15:08.496 [2024-11-08T03:58:43.607Z] =================================================================================================================== 00:15:08.496 [2024-11-08T03:58:43.607Z] Total : 9707.00 37.92 0.00 0.00 0.00 0.00 0.00 00:15:08.496 00:15:08.496 00:15:08.496 Latency(us) 00:15:08.496 [2024-11-08T03:58:43.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.497 [2024-11-08T03:58:43.608Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.497 Nvme0n1 : 10.00 9715.02 37.95 0.00 0.00 13171.23 5838.66 65297.69 00:15:08.497 [2024-11-08T03:58:43.608Z] =================================================================================================================== 00:15:08.497 [2024-11-08T03:58:43.608Z] Total : 9715.02 37.95 0.00 0.00 13171.23 5838.66 65297.69 00:15:08.497 0 00:15:08.497 03:58:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73189 00:15:08.497 03:58:43 -- common/autotest_common.sh@936 -- # '[' -z 73189 ']' 00:15:08.497 03:58:43 -- common/autotest_common.sh@940 -- # kill -0 73189 00:15:08.497 03:58:43 -- common/autotest_common.sh@941 -- # uname 00:15:08.497 03:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.497 03:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73189 00:15:08.497 03:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:08.497 03:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:08.497 killing process with pid 73189 00:15:08.497 
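[annotation] The table above comes from bdevperf driving the exported lvol over TCP while the lvstore is grown underneath it: the run-2 sample brackets the bdev_lvol_grow_lvstore call, after which total_data_clusters is asserted to be 99. A sketch of that harness ($rpc and $lvs as before; paths relative to the SPDK repo):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 2
    $rpc bdev_lvol_grow_lvstore -u "$lvs"       # grow under live I/O
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The summary line is internally consistent: 9715 IOPS of 4 KiB writes is 9715 * 4096 / 2^20, about 37.95 MiB/s, and by Little's law a queue depth of 128 at the reported ~13.2 ms average latency sustains roughly 128 / 0.0132, about 9.7k IOPS.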
03:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73189' 00:15:08.497 03:58:43 -- common/autotest_common.sh@955 -- # kill 73189 00:15:08.497 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.497 00:15:08.497 Latency(us) 00:15:08.497 [2024-11-08T03:58:43.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.497 [2024-11-08T03:58:43.608Z] =================================================================================================================== 00:15:08.497 [2024-11-08T03:58:43.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.497 03:58:43 -- common/autotest_common.sh@960 -- # wait 73189 00:15:08.497 03:58:43 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:08.755 03:58:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:08.755 03:58:43 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:09.013 03:58:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:09.013 03:58:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:09.013 03:58:44 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:09.270 [2024-11-08 03:58:44.268117] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:09.270 03:58:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:09.270 03:58:44 -- common/autotest_common.sh@650 -- # local es=0 00:15:09.270 03:58:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:09.270 03:58:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.270 03:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.270 03:58:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.270 03:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.270 03:58:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.270 03:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.270 03:58:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.271 03:58:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:09.271 03:58:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:09.529 2024/11/08 03:58:44 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:09.529 request: 00:15:09.529 { 00:15:09.529 "method": "bdev_lvol_get_lvstores", 00:15:09.529 "params": { 00:15:09.529 "uuid": "cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30" 00:15:09.529 } 00:15:09.529 } 00:15:09.529 Got JSON-RPC error response 00:15:09.529 GoRPCClient: error on JSON-RPC call 00:15:09.529 03:58:44 -- common/autotest_common.sh@653 -- # es=1 00:15:09.529 03:58:44 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.529 03:58:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.529 03:58:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.529 03:58:44 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:09.788 aio_bdev 00:15:09.788 03:58:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 00:15:09.788 03:58:44 -- common/autotest_common.sh@897 -- # local bdev_name=006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 00:15:09.788 03:58:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.788 03:58:44 -- common/autotest_common.sh@899 -- # local i 00:15:09.788 03:58:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.788 03:58:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.788 03:58:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:10.046 03:58:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 -t 2000 00:15:10.305 [ 00:15:10.305 { 00:15:10.305 "aliases": [ 00:15:10.305 "lvs/lvol" 00:15:10.305 ], 00:15:10.305 "assigned_rate_limits": { 00:15:10.305 "r_mbytes_per_sec": 0, 00:15:10.305 "rw_ios_per_sec": 0, 00:15:10.305 "rw_mbytes_per_sec": 0, 00:15:10.305 "w_mbytes_per_sec": 0 00:15:10.305 }, 00:15:10.305 "block_size": 4096, 00:15:10.305 "claimed": false, 00:15:10.305 "driver_specific": { 00:15:10.305 "lvol": { 00:15:10.305 "base_bdev": "aio_bdev", 00:15:10.305 "clone": false, 00:15:10.305 "esnap_clone": false, 00:15:10.305 "lvol_store_uuid": "cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30", 00:15:10.305 "snapshot": false, 00:15:10.305 "thin_provision": false 00:15:10.305 } 00:15:10.305 }, 00:15:10.305 "name": "006fa24c-de4b-4ff0-8b57-0ecdc0ec8985", 00:15:10.305 "num_blocks": 38912, 00:15:10.305 "product_name": "Logical Volume", 00:15:10.305 "supported_io_types": { 00:15:10.305 "abort": false, 00:15:10.305 "compare": false, 00:15:10.305 "compare_and_write": false, 00:15:10.305 "flush": false, 00:15:10.305 "nvme_admin": false, 00:15:10.305 "nvme_io": false, 00:15:10.305 "read": true, 00:15:10.305 "reset": true, 00:15:10.305 "unmap": true, 00:15:10.305 "write": true, 00:15:10.305 "write_zeroes": true 00:15:10.305 }, 00:15:10.305 "uuid": "006fa24c-de4b-4ff0-8b57-0ecdc0ec8985", 00:15:10.305 "zoned": false 00:15:10.305 } 00:15:10.305 ] 00:15:10.305 03:58:45 -- common/autotest_common.sh@905 -- # return 0 00:15:10.305 03:58:45 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:10.305 03:58:45 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:10.563 03:58:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:10.563 03:58:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:10.563 03:58:45 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:10.822 03:58:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:10.822 03:58:45 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 006fa24c-de4b-4ff0-8b57-0ecdc0ec8985 00:15:11.080 03:58:46 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u cd1c0876-5a8c-48df-aa14-4e5d3e9d2d30 00:15:11.339 03:58:46 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.597 03:58:46 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.855 00:15:11.855 real 0m17.591s 00:15:11.855 user 0m16.819s 00:15:11.855 sys 0m2.163s 00:15:11.855 ************************************ 00:15:11.855 END TEST lvs_grow_clean 00:15:11.855 ************************************ 00:15:11.855 03:58:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:11.855 03:58:46 -- common/autotest_common.sh@10 -- # set +x 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:11.855 03:58:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.855 03:58:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.855 03:58:46 -- common/autotest_common.sh@10 -- # set +x 00:15:11.855 ************************************ 00:15:11.855 START TEST lvs_grow_dirty 00:15:11.855 ************************************ 00:15:11.855 03:58:46 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:11.855 03:58:46 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:11.856 03:58:46 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:11.856 03:58:46 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.856 03:58:46 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.856 03:58:46 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.451 03:58:47 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:12.451 03:58:47 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:12.709 03:58:47 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:12.709 03:58:47 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:12.709 03:58:47 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:12.967 03:58:47 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:12.967 03:58:47 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:12.967 03:58:47 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a lvol 150 00:15:12.967 03:58:48 -- target/nvmf_lvs_grow.sh@33 -- # lvol=535646bc-5484-4166-afbe-dfc86c951cd3 00:15:12.967 03:58:48 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:12.967 03:58:48 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:13.225 [2024-11-08 03:58:48.291450] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:13.225 [2024-11-08 03:58:48.291512] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:13.225 true 00:15:13.225 03:58:48 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:13.225 03:58:48 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:13.483 03:58:48 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:13.483 03:58:48 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.740 03:58:48 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 535646bc-5484-4166-afbe-dfc86c951cd3 00:15:13.998 03:58:49 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:14.256 03:58:49 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:14.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.574 03:58:49 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:14.574 03:58:49 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73619 00:15:14.574 03:58:49 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.575 03:58:49 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73619 /var/tmp/bdevperf.sock 00:15:14.575 03:58:49 -- common/autotest_common.sh@829 -- # '[' -z 73619 ']' 00:15:14.575 03:58:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.575 03:58:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.575 03:58:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.575 03:58:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.575 03:58:49 -- common/autotest_common.sh@10 -- # set +x 00:15:14.575 [2024-11-08 03:58:49.562337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
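
The truncate/bdev_aio_rescan pair traced above is the mechanism this test exercises: the backing file is grown and the AIO bdev re-reads its size (51200 -> 102400 blocks) so the lvstore can later be grown on top of it. A minimal sketch of that flow against a running target, reusing the paths and lvstore UUID from this log:

    # grow the backing file, then let the AIO bdev pick up the new size
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
    # the cluster count stays at 49 until bdev_lvol_grow_lvstore runs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
        -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a | jq -r '.[0].total_data_clusters'
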
00:15:14.575 [2024-11-08 03:58:49.562453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73619 ] 00:15:14.832 [2024-11-08 03:58:49.691400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.832 [2024-11-08 03:58:49.772733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.398 03:58:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.398 03:58:50 -- common/autotest_common.sh@862 -- # return 0 00:15:15.398 03:58:50 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:15.658 Nvme0n1 00:15:15.658 03:58:50 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:15.916 [ 00:15:15.916 { 00:15:15.916 "aliases": [ 00:15:15.916 "535646bc-5484-4166-afbe-dfc86c951cd3" 00:15:15.916 ], 00:15:15.916 "assigned_rate_limits": { 00:15:15.916 "r_mbytes_per_sec": 0, 00:15:15.916 "rw_ios_per_sec": 0, 00:15:15.916 "rw_mbytes_per_sec": 0, 00:15:15.916 "w_mbytes_per_sec": 0 00:15:15.916 }, 00:15:15.916 "block_size": 4096, 00:15:15.916 "claimed": false, 00:15:15.916 "driver_specific": { 00:15:15.916 "mp_policy": "active_passive", 00:15:15.916 "nvme": [ 00:15:15.916 { 00:15:15.916 "ctrlr_data": { 00:15:15.916 "ana_reporting": false, 00:15:15.916 "cntlid": 1, 00:15:15.916 "firmware_revision": "24.01.1", 00:15:15.916 "model_number": "SPDK bdev Controller", 00:15:15.916 "multi_ctrlr": true, 00:15:15.916 "oacs": { 00:15:15.916 "firmware": 0, 00:15:15.916 "format": 0, 00:15:15.916 "ns_manage": 0, 00:15:15.916 "security": 0 00:15:15.916 }, 00:15:15.916 "serial_number": "SPDK0", 00:15:15.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.916 "vendor_id": "0x8086" 00:15:15.916 }, 00:15:15.916 "ns_data": { 00:15:15.916 "can_share": true, 00:15:15.916 "id": 1 00:15:15.916 }, 00:15:15.916 "trid": { 00:15:15.916 "adrfam": "IPv4", 00:15:15.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.916 "traddr": "10.0.0.2", 00:15:15.916 "trsvcid": "4420", 00:15:15.916 "trtype": "TCP" 00:15:15.916 }, 00:15:15.916 "vs": { 00:15:15.916 "nvme_version": "1.3" 00:15:15.916 } 00:15:15.916 } 00:15:15.916 ] 00:15:15.916 }, 00:15:15.916 "name": "Nvme0n1", 00:15:15.916 "num_blocks": 38912, 00:15:15.916 "product_name": "NVMe disk", 00:15:15.916 "supported_io_types": { 00:15:15.916 "abort": true, 00:15:15.916 "compare": true, 00:15:15.916 "compare_and_write": true, 00:15:15.916 "flush": true, 00:15:15.916 "nvme_admin": true, 00:15:15.916 "nvme_io": true, 00:15:15.916 "read": true, 00:15:15.917 "reset": true, 00:15:15.917 "unmap": true, 00:15:15.917 "write": true, 00:15:15.917 "write_zeroes": true 00:15:15.917 }, 00:15:15.917 "uuid": "535646bc-5484-4166-afbe-dfc86c951cd3", 00:15:15.917 "zoned": false 00:15:15.917 } 00:15:15.917 ] 00:15:15.917 03:58:50 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.917 03:58:50 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73666 00:15:15.917 03:58:50 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:16.175 Running I/O for 10 seconds... 
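
bdevperf runs here as a separate process controlled over its own RPC socket: the NVMe-oF controller is attached first, then perform_tests starts the 10-second randwrite job whose per-second tables follow. Condensed from the invocations above (a sketch omitting the waitforlisten/trap plumbing):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
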
00:15:17.111 Latency(us) 00:15:17.111 [2024-11-08T03:58:52.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.111 [2024-11-08T03:58:52.222Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.111 Nvme0n1 : 1.00 10245.00 40.02 0.00 0.00 0.00 0.00 0.00 00:15:17.111 [2024-11-08T03:58:52.222Z] =================================================================================================================== 00:15:17.112 [2024-11-08T03:58:52.223Z] Total : 10245.00 40.02 0.00 0.00 0.00 0.00 0.00 00:15:17.112 00:15:18.047 03:58:52 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:18.047 [2024-11-08T03:58:53.158Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.047 Nvme0n1 : 2.00 10244.00 40.02 0.00 0.00 0.00 0.00 0.00 00:15:18.047 [2024-11-08T03:58:53.158Z] =================================================================================================================== 00:15:18.047 [2024-11-08T03:58:53.158Z] Total : 10244.00 40.02 0.00 0.00 0.00 0.00 0.00 00:15:18.047 00:15:18.306 true 00:15:18.306 03:58:53 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:18.306 03:58:53 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:18.565 03:58:53 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:18.565 03:58:53 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:18.565 03:58:53 -- target/nvmf_lvs_grow.sh@65 -- # wait 73666 00:15:19.132 [2024-11-08T03:58:54.243Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.132 Nvme0n1 : 3.00 10212.67 39.89 0.00 0.00 0.00 0.00 0.00 00:15:19.132 [2024-11-08T03:58:54.243Z] =================================================================================================================== 00:15:19.132 [2024-11-08T03:58:54.243Z] Total : 10212.67 39.89 0.00 0.00 0.00 0.00 0.00 00:15:19.132 00:15:20.069 [2024-11-08T03:58:55.180Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.069 Nvme0n1 : 4.00 10171.25 39.73 0.00 0.00 0.00 0.00 0.00 00:15:20.069 [2024-11-08T03:58:55.180Z] =================================================================================================================== 00:15:20.069 [2024-11-08T03:58:55.180Z] Total : 10171.25 39.73 0.00 0.00 0.00 0.00 0.00 00:15:20.069 00:15:21.005 [2024-11-08T03:58:56.116Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.005 Nvme0n1 : 5.00 10120.60 39.53 0.00 0.00 0.00 0.00 0.00 00:15:21.005 [2024-11-08T03:58:56.116Z] =================================================================================================================== 00:15:21.005 [2024-11-08T03:58:56.116Z] Total : 10120.60 39.53 0.00 0.00 0.00 0.00 0.00 00:15:21.005 00:15:22.380 [2024-11-08T03:58:57.491Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.380 Nvme0n1 : 6.00 10100.50 39.46 0.00 0.00 0.00 0.00 0.00 00:15:22.380 [2024-11-08T03:58:57.491Z] =================================================================================================================== 00:15:22.380 [2024-11-08T03:58:57.491Z] Total : 10100.50 39.46 0.00 0.00 0.00 0.00 0.00 00:15:22.380 00:15:23.315 [2024-11-08T03:58:58.426Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:15:23.315 Nvme0n1 : 7.00 9658.86 37.73 0.00 0.00 0.00 0.00 0.00 00:15:23.315 [2024-11-08T03:58:58.426Z] =================================================================================================================== 00:15:23.315 [2024-11-08T03:58:58.426Z] Total : 9658.86 37.73 0.00 0.00 0.00 0.00 0.00 00:15:23.315 00:15:24.250 [2024-11-08T03:58:59.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.250 Nvme0n1 : 8.00 9542.88 37.28 0.00 0.00 0.00 0.00 0.00 00:15:24.250 [2024-11-08T03:58:59.361Z] =================================================================================================================== 00:15:24.250 [2024-11-08T03:58:59.361Z] Total : 9542.88 37.28 0.00 0.00 0.00 0.00 0.00 00:15:24.250 00:15:25.185 [2024-11-08T03:59:00.296Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.185 Nvme0n1 : 9.00 9439.56 36.87 0.00 0.00 0.00 0.00 0.00 00:15:25.185 [2024-11-08T03:59:00.296Z] =================================================================================================================== 00:15:25.185 [2024-11-08T03:59:00.296Z] Total : 9439.56 36.87 0.00 0.00 0.00 0.00 0.00 00:15:25.185 00:15:26.120 [2024-11-08T03:59:01.231Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.120 Nvme0n1 : 10.00 9361.10 36.57 0.00 0.00 0.00 0.00 0.00 00:15:26.120 [2024-11-08T03:59:01.231Z] =================================================================================================================== 00:15:26.120 [2024-11-08T03:59:01.231Z] Total : 9361.10 36.57 0.00 0.00 0.00 0.00 0.00 00:15:26.120 00:15:26.120 00:15:26.120 Latency(us) 00:15:26.120 [2024-11-08T03:59:01.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.120 [2024-11-08T03:59:01.231Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.120 Nvme0n1 : 10.00 9371.59 36.61 0.00 0.00 13653.74 4736.47 224967.21 00:15:26.120 [2024-11-08T03:59:01.231Z] =================================================================================================================== 00:15:26.120 [2024-11-08T03:59:01.231Z] Total : 9371.59 36.61 0.00 0.00 13653.74 4736.47 224967.21 00:15:26.120 0 00:15:26.120 03:59:01 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73619 00:15:26.120 03:59:01 -- common/autotest_common.sh@936 -- # '[' -z 73619 ']' 00:15:26.120 03:59:01 -- common/autotest_common.sh@940 -- # kill -0 73619 00:15:26.120 03:59:01 -- common/autotest_common.sh@941 -- # uname 00:15:26.120 03:59:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.120 03:59:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73619 00:15:26.120 killing process with pid 73619 00:15:26.120 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.120 00:15:26.120 Latency(us) 00:15:26.120 [2024-11-08T03:59:01.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.120 [2024-11-08T03:59:01.231Z] =================================================================================================================== 00:15:26.120 [2024-11-08T03:59:01.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.120 03:59:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:26.120 03:59:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:26.120 03:59:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73619' 00:15:26.120 03:59:01 -- 
common/autotest_common.sh@955 -- # kill 73619 00:15:26.120 03:59:01 -- common/autotest_common.sh@960 -- # wait 73619 00:15:26.378 03:59:01 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:26.637 03:59:01 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:26.637 03:59:01 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73021 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@74 -- # wait 73021 00:15:26.896 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73021 Killed "${NVMF_APP[@]}" "$@" 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:26.896 03:59:01 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:26.896 03:59:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.896 03:59:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.896 03:59:01 -- common/autotest_common.sh@10 -- # set +x 00:15:26.896 03:59:01 -- nvmf/common.sh@469 -- # nvmfpid=73819 00:15:26.896 03:59:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.896 03:59:01 -- nvmf/common.sh@470 -- # waitforlisten 73819 00:15:26.896 03:59:01 -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:15:26.896 03:59:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.896 03:59:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.896 03:59:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.896 03:59:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.896 03:59:01 -- common/autotest_common.sh@10 -- # set +x 00:15:26.896 [2024-11-08 03:59:01.926551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:26.896 [2024-11-08 03:59:01.926649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.154 [2024-11-08 03:59:02.067174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.154 [2024-11-08 03:59:02.162518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.154 [2024-11-08 03:59:02.162640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.154 [2024-11-08 03:59:02.162652] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.154 [2024-11-08 03:59:02.162660] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
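
This is the step that makes the test "dirty": the nvmf target holding the lvstore open is killed with SIGKILL, so no clean shutdown is recorded in the blobstore, and a fresh target is started against the same file. The "Performing recovery on blobstore" notices just below are the expected consequence. Roughly:

    kill -9 "$nvmfpid"     # deliberately skip the clean lvstore shutdown
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # re-creating the AIO bdev re-loads the lvstore and triggers recovery
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
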
00:15:27.154 [2024-11-08 03:59:02.162685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.721 03:59:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.721 03:59:02 -- common/autotest_common.sh@862 -- # return 0 00:15:27.721 03:59:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.721 03:59:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.721 03:59:02 -- common/autotest_common.sh@10 -- # set +x 00:15:27.979 03:59:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.979 03:59:02 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:28.238 [2024-11-08 03:59:03.116823] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:28.238 [2024-11-08 03:59:03.117053] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:28.238 [2024-11-08 03:59:03.117247] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:28.238 03:59:03 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:28.238 03:59:03 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 535646bc-5484-4166-afbe-dfc86c951cd3 00:15:28.238 03:59:03 -- common/autotest_common.sh@897 -- # local bdev_name=535646bc-5484-4166-afbe-dfc86c951cd3 00:15:28.238 03:59:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.238 03:59:03 -- common/autotest_common.sh@899 -- # local i 00:15:28.238 03:59:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.238 03:59:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.238 03:59:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:28.496 03:59:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 535646bc-5484-4166-afbe-dfc86c951cd3 -t 2000 00:15:28.496 [ 00:15:28.496 { 00:15:28.496 "aliases": [ 00:15:28.496 "lvs/lvol" 00:15:28.496 ], 00:15:28.496 "assigned_rate_limits": { 00:15:28.496 "r_mbytes_per_sec": 0, 00:15:28.496 "rw_ios_per_sec": 0, 00:15:28.496 "rw_mbytes_per_sec": 0, 00:15:28.496 "w_mbytes_per_sec": 0 00:15:28.496 }, 00:15:28.496 "block_size": 4096, 00:15:28.496 "claimed": false, 00:15:28.496 "driver_specific": { 00:15:28.496 "lvol": { 00:15:28.496 "base_bdev": "aio_bdev", 00:15:28.496 "clone": false, 00:15:28.496 "esnap_clone": false, 00:15:28.496 "lvol_store_uuid": "8508334c-0ea6-4dbe-9709-d26dac97bd7a", 00:15:28.496 "snapshot": false, 00:15:28.496 "thin_provision": false 00:15:28.496 } 00:15:28.496 }, 00:15:28.496 "name": "535646bc-5484-4166-afbe-dfc86c951cd3", 00:15:28.496 "num_blocks": 38912, 00:15:28.496 "product_name": "Logical Volume", 00:15:28.496 "supported_io_types": { 00:15:28.496 "abort": false, 00:15:28.496 "compare": false, 00:15:28.496 "compare_and_write": false, 00:15:28.496 "flush": false, 00:15:28.496 "nvme_admin": false, 00:15:28.496 "nvme_io": false, 00:15:28.496 "read": true, 00:15:28.496 "reset": true, 00:15:28.496 "unmap": true, 00:15:28.496 "write": true, 00:15:28.496 "write_zeroes": true 00:15:28.496 }, 00:15:28.496 "uuid": "535646bc-5484-4166-afbe-dfc86c951cd3", 00:15:28.496 "zoned": false 00:15:28.496 } 00:15:28.496 ] 00:15:28.496 03:59:03 -- common/autotest_common.sh@905 -- # return 0 00:15:28.496 03:59:03 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:28.496 03:59:03 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:28.755 03:59:03 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:28.755 03:59:03 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:28.755 03:59:03 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:29.013 03:59:04 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:29.013 03:59:04 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:29.271 [2024-11-08 03:59:04.202408] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:29.271 03:59:04 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:29.271 03:59:04 -- common/autotest_common.sh@650 -- # local es=0 00:15:29.271 03:59:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:29.271 03:59:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.271 03:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.271 03:59:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.271 03:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.271 03:59:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.271 03:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.271 03:59:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.271 03:59:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:29.271 03:59:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:29.530 2024/11/08 03:59:04 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8508334c-0ea6-4dbe-9709-d26dac97bd7a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:29.530 request: 00:15:29.530 { 00:15:29.530 "method": "bdev_lvol_get_lvstores", 00:15:29.530 "params": { 00:15:29.530 "uuid": "8508334c-0ea6-4dbe-9709-d26dac97bd7a" 00:15:29.530 } 00:15:29.530 } 00:15:29.530 Got JSON-RPC error response 00:15:29.530 GoRPCClient: error on JSON-RPC call 00:15:29.530 03:59:04 -- common/autotest_common.sh@653 -- # es=1 00:15:29.530 03:59:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.530 03:59:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.530 03:59:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.530 03:59:04 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:29.789 aio_bdev 00:15:29.789 03:59:04 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 535646bc-5484-4166-afbe-dfc86c951cd3 00:15:29.789 03:59:04 -- common/autotest_common.sh@897 -- # local bdev_name=535646bc-5484-4166-afbe-dfc86c951cd3 00:15:29.789 03:59:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:29.789 03:59:04 -- 
common/autotest_common.sh@899 -- # local i 00:15:29.789 03:59:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:29.789 03:59:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:29.789 03:59:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.049 03:59:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 535646bc-5484-4166-afbe-dfc86c951cd3 -t 2000 00:15:30.308 [ 00:15:30.308 { 00:15:30.308 "aliases": [ 00:15:30.308 "lvs/lvol" 00:15:30.308 ], 00:15:30.308 "assigned_rate_limits": { 00:15:30.308 "r_mbytes_per_sec": 0, 00:15:30.308 "rw_ios_per_sec": 0, 00:15:30.308 "rw_mbytes_per_sec": 0, 00:15:30.308 "w_mbytes_per_sec": 0 00:15:30.308 }, 00:15:30.308 "block_size": 4096, 00:15:30.308 "claimed": false, 00:15:30.308 "driver_specific": { 00:15:30.308 "lvol": { 00:15:30.308 "base_bdev": "aio_bdev", 00:15:30.308 "clone": false, 00:15:30.308 "esnap_clone": false, 00:15:30.308 "lvol_store_uuid": "8508334c-0ea6-4dbe-9709-d26dac97bd7a", 00:15:30.308 "snapshot": false, 00:15:30.308 "thin_provision": false 00:15:30.308 } 00:15:30.308 }, 00:15:30.308 "name": "535646bc-5484-4166-afbe-dfc86c951cd3", 00:15:30.308 "num_blocks": 38912, 00:15:30.308 "product_name": "Logical Volume", 00:15:30.308 "supported_io_types": { 00:15:30.308 "abort": false, 00:15:30.308 "compare": false, 00:15:30.308 "compare_and_write": false, 00:15:30.308 "flush": false, 00:15:30.308 "nvme_admin": false, 00:15:30.308 "nvme_io": false, 00:15:30.308 "read": true, 00:15:30.308 "reset": true, 00:15:30.308 "unmap": true, 00:15:30.308 "write": true, 00:15:30.308 "write_zeroes": true 00:15:30.308 }, 00:15:30.308 "uuid": "535646bc-5484-4166-afbe-dfc86c951cd3", 00:15:30.308 "zoned": false 00:15:30.308 } 00:15:30.308 ] 00:15:30.308 03:59:05 -- common/autotest_common.sh@905 -- # return 0 00:15:30.308 03:59:05 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:30.308 03:59:05 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:30.566 03:59:05 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:30.566 03:59:05 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:30.566 03:59:05 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:30.825 03:59:05 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:30.825 03:59:05 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 535646bc-5484-4166-afbe-dfc86c951cd3 00:15:30.825 03:59:05 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8508334c-0ea6-4dbe-9709-d26dac97bd7a 00:15:31.083 03:59:06 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:31.341 03:59:06 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:31.908 00:15:31.908 real 0m19.820s 00:15:31.908 user 0m40.271s 00:15:31.908 sys 0m8.075s 00:15:31.908 03:59:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:31.908 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:15:31.908 ************************************ 00:15:31.908 END TEST lvs_grow_dirty 00:15:31.908 ************************************ 00:15:31.908 03:59:06 -- target/nvmf_lvs_grow.sh@1 
-- # process_shm --id 0 00:15:31.908 03:59:06 -- common/autotest_common.sh@806 -- # type=--id 00:15:31.908 03:59:06 -- common/autotest_common.sh@807 -- # id=0 00:15:31.908 03:59:06 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:31.908 03:59:06 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:31.908 03:59:06 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:31.908 03:59:06 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:31.908 03:59:06 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:31.908 03:59:06 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:31.908 nvmf_trace.0 00:15:31.908 03:59:06 -- common/autotest_common.sh@821 -- # return 0 00:15:31.908 03:59:06 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:31.908 03:59:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:31.908 03:59:06 -- nvmf/common.sh@116 -- # sync 00:15:32.474 03:59:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:32.474 03:59:07 -- nvmf/common.sh@119 -- # set +e 00:15:32.474 03:59:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:32.474 03:59:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:32.474 rmmod nvme_tcp 00:15:32.474 rmmod nvme_fabrics 00:15:32.474 rmmod nvme_keyring 00:15:32.474 03:59:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:32.474 03:59:07 -- nvmf/common.sh@123 -- # set -e 00:15:32.474 03:59:07 -- nvmf/common.sh@124 -- # return 0 00:15:32.474 03:59:07 -- nvmf/common.sh@477 -- # '[' -n 73819 ']' 00:15:32.474 03:59:07 -- nvmf/common.sh@478 -- # killprocess 73819 00:15:32.474 03:59:07 -- common/autotest_common.sh@936 -- # '[' -z 73819 ']' 00:15:32.474 03:59:07 -- common/autotest_common.sh@940 -- # kill -0 73819 00:15:32.474 03:59:07 -- common/autotest_common.sh@941 -- # uname 00:15:32.474 03:59:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.474 03:59:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73819 00:15:32.474 03:59:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:32.474 03:59:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:32.474 killing process with pid 73819 00:15:32.474 03:59:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73819' 00:15:32.474 03:59:07 -- common/autotest_common.sh@955 -- # kill 73819 00:15:32.474 03:59:07 -- common/autotest_common.sh@960 -- # wait 73819 00:15:33.041 03:59:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.041 03:59:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:33.041 03:59:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:33.041 03:59:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.041 03:59:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:33.041 03:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.041 03:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.041 03:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.041 03:59:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:33.041 00:15:33.041 real 0m40.540s 00:15:33.041 user 1m3.728s 00:15:33.041 sys 0m11.403s 00:15:33.041 03:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.042 03:59:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.042 ************************************ 00:15:33.042 END TEST 
nvmf_lvs_grow 00:15:33.042 ************************************ 00:15:33.042 03:59:07 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:33.042 03:59:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:33.042 03:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.042 03:59:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.042 ************************************ 00:15:33.042 START TEST nvmf_bdev_io_wait 00:15:33.042 ************************************ 00:15:33.042 03:59:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:33.042 * Looking for test storage... 00:15:33.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.042 03:59:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:33.042 03:59:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:33.042 03:59:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:33.042 03:59:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:33.042 03:59:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:33.042 03:59:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:33.042 03:59:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:33.042 03:59:08 -- scripts/common.sh@335 -- # IFS=.-: 00:15:33.042 03:59:08 -- scripts/common.sh@335 -- # read -ra ver1 00:15:33.042 03:59:08 -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.042 03:59:08 -- scripts/common.sh@336 -- # read -ra ver2 00:15:33.042 03:59:08 -- scripts/common.sh@337 -- # local 'op=<' 00:15:33.042 03:59:08 -- scripts/common.sh@339 -- # ver1_l=2 00:15:33.042 03:59:08 -- scripts/common.sh@340 -- # ver2_l=1 00:15:33.042 03:59:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:33.042 03:59:08 -- scripts/common.sh@343 -- # case "$op" in 00:15:33.042 03:59:08 -- scripts/common.sh@344 -- # : 1 00:15:33.042 03:59:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:33.042 03:59:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.042 03:59:08 -- scripts/common.sh@364 -- # decimal 1 00:15:33.042 03:59:08 -- scripts/common.sh@352 -- # local d=1 00:15:33.042 03:59:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.042 03:59:08 -- scripts/common.sh@354 -- # echo 1 00:15:33.042 03:59:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:33.042 03:59:08 -- scripts/common.sh@365 -- # decimal 2 00:15:33.042 03:59:08 -- scripts/common.sh@352 -- # local d=2 00:15:33.042 03:59:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.042 03:59:08 -- scripts/common.sh@354 -- # echo 2 00:15:33.042 03:59:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:33.042 03:59:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:33.042 03:59:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:33.042 03:59:08 -- scripts/common.sh@367 -- # return 0 00:15:33.042 03:59:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.042 03:59:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.042 --rc genhtml_branch_coverage=1 00:15:33.042 --rc genhtml_function_coverage=1 00:15:33.042 --rc genhtml_legend=1 00:15:33.042 --rc geninfo_all_blocks=1 00:15:33.042 --rc geninfo_unexecuted_blocks=1 00:15:33.042 00:15:33.042 ' 00:15:33.042 03:59:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.042 --rc genhtml_branch_coverage=1 00:15:33.042 --rc genhtml_function_coverage=1 00:15:33.042 --rc genhtml_legend=1 00:15:33.042 --rc geninfo_all_blocks=1 00:15:33.042 --rc geninfo_unexecuted_blocks=1 00:15:33.042 00:15:33.042 ' 00:15:33.042 03:59:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.042 --rc genhtml_branch_coverage=1 00:15:33.042 --rc genhtml_function_coverage=1 00:15:33.042 --rc genhtml_legend=1 00:15:33.042 --rc geninfo_all_blocks=1 00:15:33.042 --rc geninfo_unexecuted_blocks=1 00:15:33.042 00:15:33.042 ' 00:15:33.042 03:59:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.042 --rc genhtml_branch_coverage=1 00:15:33.042 --rc genhtml_function_coverage=1 00:15:33.042 --rc genhtml_legend=1 00:15:33.042 --rc geninfo_all_blocks=1 00:15:33.042 --rc geninfo_unexecuted_blocks=1 00:15:33.042 00:15:33.042 ' 00:15:33.042 03:59:08 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.042 03:59:08 -- nvmf/common.sh@7 -- # uname -s 00:15:33.301 03:59:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.301 03:59:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.301 03:59:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.301 03:59:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.301 03:59:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.301 03:59:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.301 03:59:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.301 03:59:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.301 03:59:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.301 03:59:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.301 03:59:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
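
The long scripts/common.sh trace above is nothing more than a semantic version compare — "is lcov 1.15 older than 2?" — used to choose the lcov coverage options that follow. Stripped of the xtrace noise, the logic is roughly this sketch:

    IFS=.- read -ra ver1 <<< "1.15"      # -> (1 15)
    IFS=.- read -ra ver2 <<< "2"         # -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo older; break; }
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo newer; break; }
    done
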
00:15:33.301 03:59:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:15:33.301 03:59:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.301 03:59:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.301 03:59:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.301 03:59:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.301 03:59:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.301 03:59:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.301 03:59:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.301 03:59:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.301 03:59:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.301 03:59:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.301 03:59:08 -- paths/export.sh@5 -- # export PATH 00:15:33.301 03:59:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.301 03:59:08 -- nvmf/common.sh@46 -- # : 0 00:15:33.301 03:59:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.301 03:59:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.301 03:59:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.301 03:59:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.301 03:59:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.301 03:59:08 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:33.301 03:59:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.301 03:59:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.301 03:59:08 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.301 03:59:08 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.301 03:59:08 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:33.301 03:59:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.301 03:59:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.302 03:59:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.302 03:59:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.302 03:59:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.302 03:59:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.302 03:59:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.302 03:59:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.302 03:59:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:33.302 03:59:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:33.302 03:59:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:33.302 03:59:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:33.302 03:59:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:33.302 03:59:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:33.302 03:59:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.302 03:59:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.302 03:59:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:33.302 03:59:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:33.302 03:59:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.302 03:59:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.302 03:59:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.302 03:59:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.302 03:59:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.302 03:59:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.302 03:59:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.302 03:59:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.302 03:59:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:33.302 03:59:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:33.302 Cannot find device "nvmf_tgt_br" 00:15:33.302 03:59:08 -- nvmf/common.sh@154 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.302 Cannot find device "nvmf_tgt_br2" 00:15:33.302 03:59:08 -- nvmf/common.sh@155 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:33.302 03:59:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:33.302 Cannot find device "nvmf_tgt_br" 00:15:33.302 03:59:08 -- nvmf/common.sh@157 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:33.302 Cannot find device "nvmf_tgt_br2" 00:15:33.302 03:59:08 -- nvmf/common.sh@158 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:33.302 03:59:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:33.302 03:59:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.302 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.302 03:59:08 -- nvmf/common.sh@161 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.302 03:59:08 -- nvmf/common.sh@162 -- # true 00:15:33.302 03:59:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.302 03:59:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.302 03:59:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.302 03:59:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.302 03:59:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.302 03:59:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.302 03:59:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.302 03:59:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:33.302 03:59:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:33.302 03:59:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:33.302 03:59:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:33.302 03:59:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:33.302 03:59:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:33.302 03:59:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.302 03:59:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.302 03:59:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.561 03:59:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:33.561 03:59:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:33.561 03:59:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.561 03:59:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.561 03:59:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.561 03:59:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.561 03:59:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.561 03:59:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:33.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:15:33.561 00:15:33.561 --- 10.0.0.2 ping statistics --- 00:15:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.561 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:15:33.561 03:59:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:33.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:15:33.561 00:15:33.561 --- 10.0.0.3 ping statistics --- 00:15:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.561 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:33.561 03:59:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:33.561 00:15:33.561 --- 10.0.0.1 ping statistics --- 00:15:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.561 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:33.561 03:59:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.561 03:59:08 -- nvmf/common.sh@421 -- # return 0 00:15:33.561 03:59:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:33.561 03:59:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.561 03:59:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:33.561 03:59:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:33.561 03:59:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.561 03:59:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:33.561 03:59:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:33.561 03:59:08 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:33.561 03:59:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:33.561 03:59:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.561 03:59:08 -- common/autotest_common.sh@10 -- # set +x 00:15:33.561 03:59:08 -- nvmf/common.sh@469 -- # nvmfpid=74250 00:15:33.561 03:59:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:33.561 03:59:08 -- nvmf/common.sh@470 -- # waitforlisten 74250 00:15:33.561 03:59:08 -- common/autotest_common.sh@829 -- # '[' -z 74250 ']' 00:15:33.561 03:59:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.561 03:59:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.561 03:59:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.561 03:59:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.561 03:59:08 -- common/autotest_common.sh@10 -- # set +x 00:15:33.561 [2024-11-08 03:59:08.587919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:33.561 [2024-11-08 03:59:08.588010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.820 [2024-11-08 03:59:08.730652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.820 [2024-11-08 03:59:08.840474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.820 [2024-11-08 03:59:08.840658] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.820 [2024-11-08 03:59:08.840677] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.820 [2024-11-08 03:59:08.840689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
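
By this point nvmf_veth_init has rebuilt the virtual topology — the nvmf_br bridge, the initiator-side veth (nvmf_init_if, 10.0.0.1) and the nvmf_tgt_ns_spdk namespace holding the target interfaces (10.0.0.2/10.0.0.3) — and the three pings verify reachability in both directions before the target boots. The checks reduce to:

    ping -c 1 10.0.0.2                                  # initiator -> target
    ping -c 1 10.0.0.3                                  # initiator -> second target IP
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator
    modprobe nvme-tcp                                   # host-side transport for later connects
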
00:15:33.820 [2024-11-08 03:59:08.840855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.820 [2024-11-08 03:59:08.841007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.820 [2024-11-08 03:59:08.841595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.820 [2024-11-08 03:59:08.841603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.757 03:59:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.758 03:59:09 -- common/autotest_common.sh@862 -- # return 0 00:15:34.758 03:59:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:34.758 03:59:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 03:59:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 [2024-11-08 03:59:09.730128] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 Malloc0 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.758 03:59:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.758 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:15:34.758 [2024-11-08 03:59:09.786924] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.758 03:59:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74303 00:15:34.758 03:59:09 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # config=() 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@30 -- # READ_PID=74305 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # local subsystem config 00:15:34.758 03:59:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:34.758 { 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme$subsystem", 00:15:34.758 "trtype": "$TEST_TRANSPORT", 00:15:34.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "$NVMF_PORT", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.758 "hdgst": ${hdgst:-false}, 00:15:34.758 "ddgst": ${ddgst:-false} 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 } 00:15:34.758 EOF 00:15:34.758 )") 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # config=() 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # local subsystem config 00:15:34.758 03:59:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:34.758 { 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme$subsystem", 00:15:34.758 "trtype": "$TEST_TRANSPORT", 00:15:34.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "$NVMF_PORT", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.758 "hdgst": ${hdgst:-false}, 00:15:34.758 "ddgst": ${ddgst:-false} 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 } 00:15:34.758 EOF 00:15:34.758 )") 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74307 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # cat 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # cat 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74310 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@35 -- # sync 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:34.758 03:59:09 -- nvmf/common.sh@544 -- # jq . 00:15:34.758 03:59:09 -- nvmf/common.sh@544 -- # jq . 
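
Each bdevperf instance receives its bdev configuration as JSON on --json /dev/fd/63, i.e. via bash process substitution of the gen_nvmf_target_json helper traced above, rather than from a file on disk. A sketch of one launch (the write job; the read, flush and unmap jobs differ only in -m, -i and -w):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256
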
00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # config=() 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # local subsystem config 00:15:34.758 03:59:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:34.758 { 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme$subsystem", 00:15:34.758 "trtype": "$TEST_TRANSPORT", 00:15:34.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "$NVMF_PORT", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.758 "hdgst": ${hdgst:-false}, 00:15:34.758 "ddgst": ${ddgst:-false} 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 } 00:15:34.758 EOF 00:15:34.758 )") 00:15:34.758 03:59:09 -- nvmf/common.sh@545 -- # IFS=, 00:15:34.758 03:59:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme1", 00:15:34.758 "trtype": "tcp", 00:15:34.758 "traddr": "10.0.0.2", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "4420", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.758 "hdgst": false, 00:15:34.758 "ddgst": false 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 }' 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # cat 00:15:34.758 03:59:09 -- nvmf/common.sh@545 -- # IFS=, 00:15:34.758 03:59:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme1", 00:15:34.758 "trtype": "tcp", 00:15:34.758 "traddr": "10.0.0.2", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "4420", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.758 "hdgst": false, 00:15:34.758 "ddgst": false 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 }' 00:15:34.758 03:59:09 -- nvmf/common.sh@544 -- # jq . 
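
The printf output above is the fully expanded form of that template for cnode1; loading it with --json simply replays the bdev_nvme_attach_controller call at startup. It is equivalent to issuing the RPC by hand against the bdevperf socket, as the lvs_grow run did earlier (a sketch with Nvme1/cnode1 substituted into that call):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
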
00:15:34.758 03:59:09 -- nvmf/common.sh@545 -- # IFS=, 00:15:34.758 03:59:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme1", 00:15:34.758 "trtype": "tcp", 00:15:34.758 "traddr": "10.0.0.2", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "4420", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.758 "hdgst": false, 00:15:34.758 "ddgst": false 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 }' 00:15:34.758 03:59:09 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # config=() 00:15:34.758 03:59:09 -- nvmf/common.sh@520 -- # local subsystem config 00:15:34.758 03:59:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:34.758 { 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme$subsystem", 00:15:34.758 "trtype": "$TEST_TRANSPORT", 00:15:34.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "$NVMF_PORT", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.758 "hdgst": ${hdgst:-false}, 00:15:34.758 "ddgst": ${ddgst:-false} 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 } 00:15:34.758 EOF 00:15:34.758 )") 00:15:34.758 03:59:09 -- nvmf/common.sh@542 -- # cat 00:15:34.758 03:59:09 -- nvmf/common.sh@544 -- # jq . 00:15:34.758 03:59:09 -- nvmf/common.sh@545 -- # IFS=, 00:15:34.758 03:59:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:34.758 "params": { 00:15:34.758 "name": "Nvme1", 00:15:34.758 "trtype": "tcp", 00:15:34.758 "traddr": "10.0.0.2", 00:15:34.758 "adrfam": "ipv4", 00:15:34.758 "trsvcid": "4420", 00:15:34.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.758 "hdgst": false, 00:15:34.758 "ddgst": false 00:15:34.758 }, 00:15:34.758 "method": "bdev_nvme_attach_controller" 00:15:34.758 }' 00:15:34.758 [2024-11-08 03:59:09.850861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:34.758 [2024-11-08 03:59:09.850948] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:35.017 [2024-11-08 03:59:09.870454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.017 [2024-11-08 03:59:09.870531] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:35.017 [2024-11-08 03:59:09.870844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.017 [2024-11-08 03:59:09.871054] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:35.017 03:59:09 -- target/bdev_io_wait.sh@37 -- # wait 74303 00:15:35.017 [2024-11-08 03:59:09.885125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:35.017 [2024-11-08 03:59:09.885227] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:15:35.017 [2024-11-08 03:59:10.109855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:35.275 [2024-11-08 03:59:10.139024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:35.276 [2024-11-08 03:59:10.222797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:35.276 [2024-11-08 03:59:10.230256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:15:35.276 [2024-11-08 03:59:10.241306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:15:35.276 [2024-11-08 03:59:10.295877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:35.276 [2024-11-08 03:59:10.313182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:15:35.534 [2024-11-08 03:59:10.399325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:15:35.534 Running I/O for 1 seconds...
00:15:35.534 Running I/O for 1 seconds...
00:15:35.534 Running I/O for 1 seconds...
00:15:35.534 Running I/O for 1 seconds...
00:15:36.470
00:15:36.470 Latency(us)
00:15:36.470 [2024-11-08T03:59:11.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.470 [2024-11-08T03:59:11.581Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:36.470 Nvme1n1 : 1.01 6861.30 26.80 0.00 0.00 18544.16 9234.62 22043.93
00:15:36.470 [2024-11-08T03:59:11.581Z] ===================================================================================================================
00:15:36.470 [2024-11-08T03:59:11.581Z] Total : 6861.30 26.80 0.00 0.00 18544.16 9234.62 22043.93
00:15:36.470
00:15:36.470 Latency(us)
00:15:36.470 [2024-11-08T03:59:11.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.470 [2024-11-08T03:59:11.581Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:36.470 Nvme1n1 : 1.01 7078.47 27.65 0.00 0.00 18005.26 7626.01 28120.90
00:15:36.470 [2024-11-08T03:59:11.581Z] ===================================================================================================================
00:15:36.470 [2024-11-08T03:59:11.581Z] Total : 7078.47 27.65 0.00 0.00 18005.26 7626.01 28120.90
00:15:36.470
00:15:36.470 Latency(us)
00:15:36.470 [2024-11-08T03:59:11.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.470 [2024-11-08T03:59:11.581Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:36.470 Nvme1n1 : 1.01 6772.39 26.45 0.00 0.00 18828.17 6702.55 41466.41
00:15:36.470 [2024-11-08T03:59:11.581Z] ===================================================================================================================
00:15:36.470 [2024-11-08T03:59:11.581Z] Total : 6772.39 26.45 0.00 0.00 18828.17 6702.55 41466.41
00:15:36.470
00:15:36.470 Latency(us)
00:15:36.470 [2024-11-08T03:59:11.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.470 [2024-11-08T03:59:11.581Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:36.470 Nvme1n1 : 1.00 227090.46 887.07 0.00 0.00 561.57 226.21 748.45
00:15:36.470 [2024-11-08T03:59:11.581Z] ===================================================================================================================
00:15:36.470 [2024-11-08T03:59:11.581Z] Total : 227090.46 887.07 0.00 0.00 561.57 226.21 748.45
00:15:36.729 03:59:11 -- target/bdev_io_wait.sh@38 -- # wait 74305
00:15:36.729 03:59:11 -- target/bdev_io_wait.sh@39 -- # wait 74307
00:15:36.988 03:59:11 -- target/bdev_io_wait.sh@40 -- # wait 74310
00:15:36.988 03:59:11 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:36.988 03:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.988 03:59:11 -- common/autotest_common.sh@10 -- # set +x
00:15:36.988 03:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.988 03:59:11 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:15:36.988 03:59:11 -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:15:36.988 03:59:11 -- nvmf/common.sh@476 -- # nvmfcleanup
00:15:36.988 03:59:11 -- nvmf/common.sh@116 -- # sync
00:15:36.988 03:59:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:36.988 03:59:11 -- nvmf/common.sh@119 -- # set +e
00:15:36.988 03:59:11 -- nvmf/common.sh@120 -- # for i in {1..20}
00:15:36.988 03:59:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:36.988 rmmod nvme_tcp
00:15:36.988 rmmod nvme_fabrics
00:15:36.988 rmmod nvme_keyring
00:15:36.988 03:59:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:36.988 03:59:11 -- nvmf/common.sh@123 -- # set -e
00:15:36.988 03:59:11 -- nvmf/common.sh@124 -- # return 0
00:15:36.988 03:59:11 -- nvmf/common.sh@477 -- # '[' -n 74250 ']'
00:15:36.988 03:59:11 -- nvmf/common.sh@478 -- # killprocess 74250
00:15:36.988 03:59:11 -- common/autotest_common.sh@936 -- # '[' -z 74250 ']'
00:15:36.988 03:59:11 -- common/autotest_common.sh@940 -- # kill -0 74250
00:15:36.988 03:59:11 -- common/autotest_common.sh@941 -- # uname
00:15:36.988 03:59:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:36.988 03:59:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74250
00:15:36.988 03:59:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:36.988 03:59:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:36.988 killing process with pid 74250
00:15:36.988 03:59:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74250'
00:15:36.988 03:59:12 -- common/autotest_common.sh@955 -- # kill 74250
00:15:36.988 03:59:12 -- common/autotest_common.sh@960 -- # wait 74250
00:15:37.247 03:59:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:37.247 03:59:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:37.247 03:59:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:37.247 03:59:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:37.247 03:59:12 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:37.247 03:59:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:37.247 03:59:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:37.247 03:59:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:37.247 03:59:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:37.247
00:15:37.247 real 0m4.373s
00:15:37.247 user 0m19.066s
00:15:37.247 sys 0m2.081s
00:15:37.247 03:59:12 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:37.247 03:59:12 -- common/autotest_common.sh@10 -- # set +x
00:15:37.247 ************************************
00:15:37.247 END TEST nvmf_bdev_io_wait
00:15:37.247 ************************************ 00:15:37.506 03:59:12 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:37.506 03:59:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.506 03:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.506 03:59:12 -- common/autotest_common.sh@10 -- # set +x 00:15:37.506 ************************************ 00:15:37.506 START TEST nvmf_queue_depth 00:15:37.506 ************************************ 00:15:37.506 03:59:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:37.506 * Looking for test storage... 00:15:37.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.506 03:59:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.506 03:59:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.506 03:59:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.506 03:59:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.506 03:59:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.506 03:59:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.506 03:59:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.506 03:59:12 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.506 03:59:12 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.506 03:59:12 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.506 03:59:12 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.506 03:59:12 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.506 03:59:12 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.506 03:59:12 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.506 03:59:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.506 03:59:12 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.506 03:59:12 -- scripts/common.sh@344 -- # : 1 00:15:37.506 03:59:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.506 03:59:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.506 03:59:12 -- scripts/common.sh@364 -- # decimal 1 00:15:37.506 03:59:12 -- scripts/common.sh@352 -- # local d=1 00:15:37.506 03:59:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.506 03:59:12 -- scripts/common.sh@354 -- # echo 1 00:15:37.506 03:59:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.506 03:59:12 -- scripts/common.sh@365 -- # decimal 2 00:15:37.506 03:59:12 -- scripts/common.sh@352 -- # local d=2 00:15:37.506 03:59:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.507 03:59:12 -- scripts/common.sh@354 -- # echo 2 00:15:37.507 03:59:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.507 03:59:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.507 03:59:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.507 03:59:12 -- scripts/common.sh@367 -- # return 0 00:15:37.507 03:59:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.507 03:59:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.507 --rc genhtml_branch_coverage=1 00:15:37.507 --rc genhtml_function_coverage=1 00:15:37.507 --rc genhtml_legend=1 00:15:37.507 --rc geninfo_all_blocks=1 00:15:37.507 --rc geninfo_unexecuted_blocks=1 00:15:37.507 00:15:37.507 ' 00:15:37.507 03:59:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.507 --rc genhtml_branch_coverage=1 00:15:37.507 --rc genhtml_function_coverage=1 00:15:37.507 --rc genhtml_legend=1 00:15:37.507 --rc geninfo_all_blocks=1 00:15:37.507 --rc geninfo_unexecuted_blocks=1 00:15:37.507 00:15:37.507 ' 00:15:37.507 03:59:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.507 --rc genhtml_branch_coverage=1 00:15:37.507 --rc genhtml_function_coverage=1 00:15:37.507 --rc genhtml_legend=1 00:15:37.507 --rc geninfo_all_blocks=1 00:15:37.507 --rc geninfo_unexecuted_blocks=1 00:15:37.507 00:15:37.507 ' 00:15:37.507 03:59:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.507 --rc genhtml_branch_coverage=1 00:15:37.507 --rc genhtml_function_coverage=1 00:15:37.507 --rc genhtml_legend=1 00:15:37.507 --rc geninfo_all_blocks=1 00:15:37.507 --rc geninfo_unexecuted_blocks=1 00:15:37.507 00:15:37.507 ' 00:15:37.507 03:59:12 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.507 03:59:12 -- nvmf/common.sh@7 -- # uname -s 00:15:37.507 03:59:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.507 03:59:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.507 03:59:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.507 03:59:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.507 03:59:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.507 03:59:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.507 03:59:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.507 03:59:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.507 03:59:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.507 03:59:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
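The scripts/common.sh walk above is the lcov version gate: "lt 1.15 2" expands to "cmp_versions 1.15 '<' 2", which splits both versions on dots and dashes and compares them field by field, treating missing fields as zero. A simplified sketch of that comparison (the real helper also routes each field through its "decimal" sanitizer, elided here):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with zeros
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # all fields equal: only ==, <=, >= succeed
}

Here lt 1.15 2 succeeds (1 < 2 decides it in the first field), so the branch- and function-coverage LCOV options traced above get exported.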
00:15:37.507 03:59:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:15:37.507 03:59:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.507 03:59:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.507 03:59:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.507 03:59:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.507 03:59:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.507 03:59:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.507 03:59:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.507 03:59:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.507 03:59:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.507 03:59:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.507 03:59:12 -- paths/export.sh@5 -- # export PATH 00:15:37.507 03:59:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.507 03:59:12 -- nvmf/common.sh@46 -- # : 0 00:15:37.507 03:59:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.507 03:59:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.507 03:59:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.507 03:59:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.507 03:59:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.507 03:59:12 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:37.507 03:59:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.507 03:59:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.507 03:59:12 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:37.507 03:59:12 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:37.507 03:59:12 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.507 03:59:12 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:37.507 03:59:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.507 03:59:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.507 03:59:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.507 03:59:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.507 03:59:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.507 03:59:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.507 03:59:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.507 03:59:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.507 03:59:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:37.507 03:59:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:37.507 03:59:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.507 03:59:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.507 03:59:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:37.507 03:59:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:37.507 03:59:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.507 03:59:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.507 03:59:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.507 03:59:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.507 03:59:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.507 03:59:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.507 03:59:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.507 03:59:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.507 03:59:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:37.766 03:59:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:37.766 Cannot find device "nvmf_tgt_br" 00:15:37.766 03:59:12 -- nvmf/common.sh@154 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.766 Cannot find device "nvmf_tgt_br2" 00:15:37.766 03:59:12 -- nvmf/common.sh@155 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:37.766 03:59:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:37.766 Cannot find device "nvmf_tgt_br" 00:15:37.766 03:59:12 -- nvmf/common.sh@157 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:37.766 Cannot find device "nvmf_tgt_br2" 00:15:37.766 03:59:12 -- nvmf/common.sh@158 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:37.766 03:59:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:37.766 03:59:12 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.766 03:59:12 -- nvmf/common.sh@161 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.766 03:59:12 -- nvmf/common.sh@162 -- # true 00:15:37.766 03:59:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.766 03:59:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.766 03:59:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.766 03:59:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.766 03:59:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.766 03:59:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.766 03:59:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.766 03:59:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:37.766 03:59:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:37.766 03:59:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:37.766 03:59:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:37.766 03:59:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:37.766 03:59:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:37.766 03:59:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.026 03:59:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.026 03:59:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.026 03:59:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:38.026 03:59:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:38.026 03:59:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.026 03:59:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.026 03:59:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.026 03:59:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.026 03:59:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.026 03:59:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:38.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:15:38.026 00:15:38.026 --- 10.0.0.2 ping statistics --- 00:15:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.026 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:15:38.026 03:59:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:38.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:38.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms
00:15:38.026
00:15:38.026 --- 10.0.0.3 ping statistics ---
00:15:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:38.026 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:15:38.026 03:59:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:38.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:38.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:15:38.026
00:15:38.026 --- 10.0.0.1 ping statistics ---
00:15:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:38.026 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:15:38.026 03:59:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:38.026 03:59:12 -- nvmf/common.sh@421 -- # return 0
00:15:38.026 03:59:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:38.026 03:59:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:38.026 03:59:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:38.026 03:59:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:38.026 03:59:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:38.026 03:59:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:38.026 03:59:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:38.026 03:59:12 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:15:38.026 03:59:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:38.026 03:59:12 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:38.026 03:59:12 -- common/autotest_common.sh@10 -- # set +x
00:15:38.026 03:59:12 -- nvmf/common.sh@469 -- # nvmfpid=74557
00:15:38.026 03:59:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:38.026 03:59:12 -- nvmf/common.sh@470 -- # waitforlisten 74557
00:15:38.026 03:59:12 -- common/autotest_common.sh@829 -- # '[' -z 74557 ']'
00:15:38.026 03:59:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:38.026 03:59:12 -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:38.026 03:59:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:38.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:38.026 03:59:12 -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:38.026 03:59:12 -- common/autotest_common.sh@10 -- # set +x
00:15:38.026 [2024-11-08 03:59:13.053959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:38.026 [2024-11-08 03:59:13.054044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:38.285 [2024-11-08 03:59:13.191444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:38.285 [2024-11-08 03:59:13.260915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:38.285 [2024-11-08 03:59:13.261055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:38.285 [2024-11-08 03:59:13.261068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
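Everything from "ip netns add" through the three pings above is nvmf_veth_init building the virtual test network: both target interfaces live inside the nvmf_tgt_ns_spdk namespace, the initiator interface stays in the root namespace, and every veth peer hangs off a single bridge. The commands below are condensed from the trace (the initial teardown attempts and their "Cannot find device" noise are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target path 1
ping -c 1 10.0.0.3                                  # root ns -> target path 2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

The three successful pings are the gate: only once both directions work does the harness prepend "ip netns exec nvmf_tgt_ns_spdk" to NVMF_APP and start the target inside the namespace.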
00:15:38.285 [2024-11-08 03:59:13.261076] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.285 [2024-11-08 03:59:13.261105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.222 03:59:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.222 03:59:13 -- common/autotest_common.sh@862 -- # return 0 00:15:39.222 03:59:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:39.222 03:59:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.222 03:59:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 03:59:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.222 03:59:14 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.222 03:59:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 [2024-11-08 03:59:14.046548] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.222 03:59:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.222 03:59:14 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.222 03:59:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 Malloc0 00:15:39.222 03:59:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.222 03:59:14 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.222 03:59:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 03:59:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.222 03:59:14 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.222 03:59:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 03:59:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.222 03:59:14 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.222 03:59:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:15:39.222 [2024-11-08 03:59:14.102985] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.222 03:59:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.222 03:59:14 -- target/queue_depth.sh@30 -- # bdevperf_pid=74607 00:15:39.222 03:59:14 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:39.222 03:59:14 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:39.222 03:59:14 -- target/queue_depth.sh@33 -- # waitforlisten 74607 /var/tmp/bdevperf.sock 00:15:39.222 03:59:14 -- common/autotest_common.sh@829 -- # '[' -z 74607 ']' 00:15:39.222 03:59:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.222 03:59:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
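The rpc_cmd records above stand the target up in five steps -- a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener -- and then start bdevperf with queue depth 1024 against it. The same sequence as explicit rpc.py calls (rpc_cmd is the harness wrapper that talks to the nvmf_tgt at /var/tmp/spdk.sock; details such as retries differ):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192          # -u 8192 sets the I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf runs as its own SPDK app with a private RPC socket; the trace
# below attaches the remote namespace and drives a 10 s verify workload:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The 1024 queue depth intentionally exceeds typical per-qpair transport limits, so the test exercises I/O queueing above the transport rather than raw throughput.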
00:15:39.222 03:59:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:39.222 03:59:14 -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:39.222 03:59:14 -- common/autotest_common.sh@10 -- # set +x
00:15:39.222 [2024-11-08 03:59:14.168295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:39.222 [2024-11-08 03:59:14.168393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ]
00:15:39.222 [2024-11-08 03:59:14.309801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.492 [2024-11-08 03:59:14.419752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:40.076 03:59:15 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:40.076 03:59:15 -- common/autotest_common.sh@862 -- # return 0
00:15:40.076 03:59:15 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:40.076 03:59:15 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:40.076 03:59:15 -- common/autotest_common.sh@10 -- # set +x
00:15:40.334 NVMe0n1
00:15:40.334 03:59:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:40.334 03:59:15 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:40.334 Running I/O for 10 seconds...
00:15:50.309
00:15:50.309 Latency(us)
00:15:50.309 [2024-11-08T03:59:25.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:50.309 [2024-11-08T03:59:25.420Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:15:50.309 Verification LBA range: start 0x0 length 0x4000
00:15:50.309 NVMe0n1 : 10.05 17211.85 67.23 0.00 0.00 59317.10 10724.07 49330.73
00:15:50.309 [2024-11-08T03:59:25.420Z] ===================================================================================================================
00:15:50.309 [2024-11-08T03:59:25.420Z] Total : 17211.85 67.23 0.00 0.00 59317.10 10724.07 49330.73
00:15:50.309 0
00:15:50.309 03:59:25 -- target/queue_depth.sh@39 -- # killprocess 74607
00:15:50.309 03:59:25 -- common/autotest_common.sh@936 -- # '[' -z 74607 ']'
00:15:50.309 03:59:25 -- common/autotest_common.sh@940 -- # kill -0 74607
00:15:50.309 03:59:25 -- common/autotest_common.sh@941 -- # uname
00:15:50.309 03:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:50.309 03:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74607
00:15:50.309 killing process with pid 74607
00:15:50.309 Received shutdown signal, test time was about 10.000000 seconds
00:15:50.309
00:15:50.309 Latency(us)
00:15:50.309 [2024-11-08T03:59:25.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:50.309 [2024-11-08T03:59:25.420Z] ===================================================================================================================
00:15:50.310 [2024-11-08T03:59:25.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:50.310 03:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:50.310 03:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:50.310 03:59:25 --
common/autotest_common.sh@954 -- # echo 'killing process with pid 74607' 00:15:50.310 03:59:25 -- common/autotest_common.sh@955 -- # kill 74607 00:15:50.310 03:59:25 -- common/autotest_common.sh@960 -- # wait 74607 00:15:50.877 03:59:25 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:50.877 03:59:25 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:50.877 03:59:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.877 03:59:25 -- nvmf/common.sh@116 -- # sync 00:15:50.877 03:59:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.877 03:59:25 -- nvmf/common.sh@119 -- # set +e 00:15:50.877 03:59:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.877 03:59:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.877 rmmod nvme_tcp 00:15:50.877 rmmod nvme_fabrics 00:15:50.877 rmmod nvme_keyring 00:15:50.877 03:59:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.877 03:59:25 -- nvmf/common.sh@123 -- # set -e 00:15:50.877 03:59:25 -- nvmf/common.sh@124 -- # return 0 00:15:50.877 03:59:25 -- nvmf/common.sh@477 -- # '[' -n 74557 ']' 00:15:50.877 03:59:25 -- nvmf/common.sh@478 -- # killprocess 74557 00:15:50.877 03:59:25 -- common/autotest_common.sh@936 -- # '[' -z 74557 ']' 00:15:50.877 03:59:25 -- common/autotest_common.sh@940 -- # kill -0 74557 00:15:50.877 03:59:25 -- common/autotest_common.sh@941 -- # uname 00:15:50.877 03:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.877 03:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74557 00:15:50.877 killing process with pid 74557 00:15:50.877 03:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:50.877 03:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:50.877 03:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74557' 00:15:50.877 03:59:25 -- common/autotest_common.sh@955 -- # kill 74557 00:15:50.877 03:59:25 -- common/autotest_common.sh@960 -- # wait 74557 00:15:51.136 03:59:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.136 03:59:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:51.136 03:59:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:51.136 03:59:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.136 03:59:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:51.136 03:59:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.136 03:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.136 03:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.395 03:59:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:51.395 00:15:51.395 real 0m13.863s 00:15:51.395 user 0m22.774s 00:15:51.395 sys 0m2.678s 00:15:51.395 ************************************ 00:15:51.395 END TEST nvmf_queue_depth 00:15:51.395 ************************************ 00:15:51.395 03:59:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.395 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:15:51.395 03:59:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.395 03:59:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.395 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:15:51.395 ************************************ 00:15:51.395 START TEST nvmf_multipath 00:15:51.395 
************************************ 00:15:51.395 03:59:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.395 * Looking for test storage... 00:15:51.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.395 03:59:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.395 03:59:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.395 03:59:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.395 03:59:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.395 03:59:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.395 03:59:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.395 03:59:26 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.395 03:59:26 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.395 03:59:26 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.395 03:59:26 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.395 03:59:26 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.395 03:59:26 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.395 03:59:26 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.395 03:59:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.395 03:59:26 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.395 03:59:26 -- scripts/common.sh@344 -- # : 1 00:15:51.395 03:59:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.395 03:59:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.395 03:59:26 -- scripts/common.sh@364 -- # decimal 1 00:15:51.395 03:59:26 -- scripts/common.sh@352 -- # local d=1 00:15:51.395 03:59:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.395 03:59:26 -- scripts/common.sh@354 -- # echo 1 00:15:51.395 03:59:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.395 03:59:26 -- scripts/common.sh@365 -- # decimal 2 00:15:51.395 03:59:26 -- scripts/common.sh@352 -- # local d=2 00:15:51.395 03:59:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.395 03:59:26 -- scripts/common.sh@354 -- # echo 2 00:15:51.395 03:59:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.395 03:59:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.395 03:59:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.395 03:59:26 -- scripts/common.sh@367 -- # return 0 00:15:51.395 03:59:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.395 --rc genhtml_branch_coverage=1 00:15:51.395 --rc genhtml_function_coverage=1 00:15:51.395 --rc genhtml_legend=1 00:15:51.395 --rc geninfo_all_blocks=1 00:15:51.395 --rc geninfo_unexecuted_blocks=1 00:15:51.395 00:15:51.395 ' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.395 --rc genhtml_branch_coverage=1 00:15:51.395 --rc genhtml_function_coverage=1 00:15:51.395 --rc genhtml_legend=1 00:15:51.395 --rc geninfo_all_blocks=1 00:15:51.395 --rc geninfo_unexecuted_blocks=1 00:15:51.395 00:15:51.395 ' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.395 --rc 
genhtml_branch_coverage=1 00:15:51.395 --rc genhtml_function_coverage=1 00:15:51.395 --rc genhtml_legend=1 00:15:51.395 --rc geninfo_all_blocks=1 00:15:51.395 --rc geninfo_unexecuted_blocks=1 00:15:51.395 00:15:51.395 ' 00:15:51.395 03:59:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.395 --rc genhtml_branch_coverage=1 00:15:51.395 --rc genhtml_function_coverage=1 00:15:51.395 --rc genhtml_legend=1 00:15:51.395 --rc geninfo_all_blocks=1 00:15:51.395 --rc geninfo_unexecuted_blocks=1 00:15:51.395 00:15:51.395 ' 00:15:51.395 03:59:26 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.395 03:59:26 -- nvmf/common.sh@7 -- # uname -s 00:15:51.395 03:59:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.395 03:59:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.395 03:59:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.395 03:59:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.395 03:59:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.395 03:59:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.395 03:59:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.395 03:59:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.395 03:59:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.395 03:59:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.395 03:59:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:15:51.395 03:59:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:15:51.395 03:59:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.395 03:59:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.395 03:59:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.395 03:59:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.395 03:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.395 03:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.395 03:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.395 03:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.395 03:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.395 03:59:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.395 03:59:26 -- paths/export.sh@5 -- # export PATH 00:15:51.395 03:59:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.395 03:59:26 -- nvmf/common.sh@46 -- # : 0 00:15:51.395 03:59:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.395 03:59:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.395 03:59:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.395 03:59:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.395 03:59:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.395 03:59:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.395 03:59:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.395 03:59:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.395 03:59:26 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.395 03:59:26 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.395 03:59:26 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:51.396 03:59:26 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.396 03:59:26 -- target/multipath.sh@43 -- # nvmftestinit 00:15:51.396 03:59:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.396 03:59:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.396 03:59:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.396 03:59:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.396 03:59:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.396 03:59:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.396 03:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.654 03:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.654 03:59:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:51.654 03:59:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:51.654 03:59:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:51.654 03:59:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:51.654 03:59:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:51.654 03:59:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:51.654 03:59:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.654 03:59:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.654 03:59:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.654 03:59:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:51.654 03:59:26 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.654 03:59:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.654 03:59:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.654 03:59:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.654 03:59:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.654 03:59:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.654 03:59:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.654 03:59:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.654 03:59:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:51.654 03:59:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:51.654 Cannot find device "nvmf_tgt_br" 00:15:51.654 03:59:26 -- nvmf/common.sh@154 -- # true 00:15:51.654 03:59:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.654 Cannot find device "nvmf_tgt_br2" 00:15:51.654 03:59:26 -- nvmf/common.sh@155 -- # true 00:15:51.654 03:59:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:51.654 03:59:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:51.654 Cannot find device "nvmf_tgt_br" 00:15:51.654 03:59:26 -- nvmf/common.sh@157 -- # true 00:15:51.654 03:59:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:51.654 Cannot find device "nvmf_tgt_br2" 00:15:51.654 03:59:26 -- nvmf/common.sh@158 -- # true 00:15:51.654 03:59:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:51.654 03:59:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:51.654 03:59:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.654 03:59:26 -- nvmf/common.sh@161 -- # true 00:15:51.655 03:59:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.655 03:59:26 -- nvmf/common.sh@162 -- # true 00:15:51.655 03:59:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.655 03:59:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.655 03:59:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.655 03:59:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.655 03:59:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.655 03:59:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.655 03:59:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.655 03:59:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.655 03:59:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.655 03:59:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.655 03:59:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.655 03:59:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.655 03:59:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.655 03:59:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:51.655 03:59:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.655 03:59:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.655 03:59:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.655 03:59:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.655 03:59:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.914 03:59:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.914 03:59:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.914 03:59:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.914 03:59:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.914 03:59:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:15:51.914 00:15:51.914 --- 10.0.0.2 ping statistics --- 00:15:51.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.914 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:15:51.914 03:59:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:15:51.914 00:15:51.914 --- 10.0.0.3 ping statistics --- 00:15:51.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.914 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:51.914 03:59:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms
00:15:51.914
00:15:51.914 --- 10.0.0.1 ping statistics ---
00:15:51.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:51.914 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:15:51.914 03:59:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:51.914 03:59:26 -- nvmf/common.sh@421 -- # return 0
00:15:51.914 03:59:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:51.914 03:59:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:51.914 03:59:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:51.914 03:59:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:51.914 03:59:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:51.914 03:59:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:51.914 03:59:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:51.914 03:59:26 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']'
00:15:51.914 03:59:26 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:15:51.914 03:59:26 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:15:51.914 03:59:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:51.914 03:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:51.914 03:59:26 -- common/autotest_common.sh@10 -- # set +x
00:15:51.914 03:59:26 -- nvmf/common.sh@469 -- # nvmfpid=74940
00:15:51.914 03:59:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:51.914 03:59:26 -- nvmf/common.sh@470 -- # waitforlisten 74940
00:15:51.914 03:59:26 -- common/autotest_common.sh@829 -- # '[' -z 74940 ']'
00:15:51.914 03:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:51.914 03:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:51.914 03:59:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:51.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:51.914 03:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:51.914 03:59:26 -- common/autotest_common.sh@10 -- # set +x
00:15:51.914 [2024-11-08 03:59:26.909127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:51.914 [2024-11-08 03:59:26.909211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:52.173 [2024-11-08 03:59:27.051409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:52.173 [2024-11-08 03:59:27.163783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:52.173 [2024-11-08 03:59:27.163971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:52.173 [2024-11-08 03:59:27.163989] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
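The multipath bring-up that follows mirrors queue_depth's, with two differences: the subsystem is created with -r so the target reports ANA (asymmetric namespace access) state, and a listener is added on each of the two target addresses before the kernel initiator connects once per path. A condensed sketch of the flow traced below (NVME_HOSTNQN/NVME_HOSTID are the nvme gen-hostnqn values from nvmf/common.sh above):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# One kernel-initiator connect per path (-g/-G enable header/data digests):
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# Two controllers (paths) now back one namespace; ANA state per path:
cat /sys/block/nvme0c0n1/ana_state   # expected: optimized
cat /sys/block/nvme0c1n1/ana_state   # expected: optimized

check_ana_state in the trace polls exactly these sysfs files, with a 20-second timeout, until each path reports the expected state.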
00:15:52.173 [2024-11-08 03:59:27.164170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.173 [2024-11-08 03:59:27.164311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.173 [2024-11-08 03:59:27.166044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.173 [2024-11-08 03:59:27.166104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.118 03:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.118 03:59:27 -- common/autotest_common.sh@862 -- # return 0 00:15:53.118 03:59:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.118 03:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.118 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:15:53.118 03:59:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.118 03:59:27 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.376 [2024-11-08 03:59:28.237588] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.376 03:59:28 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:53.634 Malloc0 00:15:53.634 03:59:28 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:53.892 03:59:28 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.150 03:59:29 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.150 [2024-11-08 03:59:29.190323] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.150 03:59:29 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:54.407 [2024-11-08 03:59:29.414575] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:54.407 03:59:29 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:54.665 03:59:29 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:54.923 03:59:29 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.923 03:59:29 -- common/autotest_common.sh@1187 -- # local i=0 00:15:54.923 03:59:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.923 03:59:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:54.923 03:59:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:56.823 03:59:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:56.823 03:59:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:56.823 03:59:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.823 03:59:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:56.823 03:59:31 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.823 03:59:31 -- common/autotest_common.sh@1197 -- # return 0 00:15:56.823 03:59:31 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:56.823 03:59:31 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:56.823 03:59:31 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:56.823 03:59:31 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:56.823 03:59:31 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:56.823 03:59:31 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:56.823 03:59:31 -- target/multipath.sh@38 -- # return 0 00:15:56.823 03:59:31 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:56.823 03:59:31 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:56.823 03:59:31 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:56.823 03:59:31 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:56.823 03:59:31 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:56.823 03:59:31 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:56.823 03:59:31 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:56.823 03:59:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:56.823 03:59:31 -- target/multipath.sh@22 -- # local timeout=20 00:15:56.823 03:59:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:56.823 03:59:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:56.823 03:59:31 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:56.823 03:59:31 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:56.823 03:59:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:56.823 03:59:31 -- target/multipath.sh@22 -- # local timeout=20 00:15:56.823 03:59:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:56.823 03:59:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:56.823 03:59:31 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:56.823 03:59:31 -- target/multipath.sh@85 -- # echo numa 00:15:56.823 03:59:31 -- target/multipath.sh@88 -- # fio_pid=75083 00:15:56.823 03:59:31 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:56.823 03:59:31 -- target/multipath.sh@90 -- # sleep 1 00:15:56.823 [global] 00:15:56.823 thread=1 00:15:56.823 invalidate=1 00:15:56.823 rw=randrw 00:15:56.823 time_based=1 00:15:56.823 runtime=6 00:15:56.823 ioengine=libaio 00:15:56.823 direct=1 00:15:56.823 bs=4096 00:15:56.823 iodepth=128 00:15:56.823 norandommap=0 00:15:56.823 numjobs=1 00:15:56.823 00:15:56.823 verify_dump=1 00:15:56.823 verify_backlog=512 00:15:56.823 verify_state_save=0 00:15:56.823 do_verify=1 00:15:56.823 verify=crc32c-intel 00:15:56.823 [job0] 00:15:56.823 filename=/dev/nvme0n1 00:15:57.082 Could not set queue depth (nvme0n1) 00:15:57.082 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.082 fio-3.35 00:15:57.082 Starting 1 thread 00:15:58.027 03:59:32 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:58.286 03:59:33 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:58.544 03:59:33 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:58.544 03:59:33 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:58.544 03:59:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.544 03:59:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:58.544 03:59:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:58.544 03:59:33 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:58.544 03:59:33 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:58.544 03:59:33 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:58.544 03:59:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.544 03:59:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:58.544 03:59:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:58.544 03:59:33 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:58.544 03:59:33 -- target/multipath.sh@25 -- # sleep 1s 00:15:59.479 03:59:34 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:59.479 03:59:34 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:59.479 03:59:34 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:59.479 03:59:34 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:59.736 03:59:34 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:59.995 03:59:35 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:59.995 03:59:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:59.995 03:59:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.995 03:59:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:59.995 03:59:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:59.995 03:59:35 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:59.995 03:59:35 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:59.995 03:59:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:59.995 03:59:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.995 03:59:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:59.995 03:59:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:59.995 03:59:35 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:59.995 03:59:35 -- target/multipath.sh@25 -- # sleep 1s 00:16:00.929 03:59:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:00.929 03:59:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:00.929 03:59:36 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:00.929 03:59:36 -- target/multipath.sh@104 -- # wait 75083 00:16:03.463 00:16:03.463 job0: (groupid=0, jobs=1): err= 0: pid=75104: Fri Nov 8 03:59:38 2024 00:16:03.463 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(314MiB/6005msec) 00:16:03.463 slat (usec): min=3, max=5311, avg=42.29, stdev=187.92 00:16:03.464 clat (usec): min=1052, max=13235, avg=6583.82, stdev=1095.12 00:16:03.464 lat (usec): min=1094, max=13244, avg=6626.12, stdev=1101.46 00:16:03.464 clat percentiles (usec): 00:16:03.464 | 1.00th=[ 4113], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 5800], 00:16:03.464 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6718], 00:16:03.464 | 70.00th=[ 6980], 80.00th=[ 7373], 90.00th=[ 7898], 95.00th=[ 8586], 00:16:03.464 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11469], 99.95th=[11994], 00:16:03.464 | 99.99th=[12780] 00:16:03.464 bw ( KiB/s): min=11776, max=35216, per=51.49%, avg=27568.73, stdev=7202.08, samples=11 00:16:03.464 iops : min= 2944, max= 8804, avg=6892.18, stdev=1800.52, samples=11 00:16:03.464 write: IOPS=7718, BW=30.1MiB/s (31.6MB/s)(158MiB/5252msec); 0 zone resets 00:16:03.464 slat (usec): min=14, max=3041, avg=54.54, stdev=129.58 00:16:03.464 clat (usec): min=425, max=13264, avg=5731.84, stdev=977.56 00:16:03.464 lat (usec): min=896, max=13288, avg=5786.37, stdev=981.66 00:16:03.464 clat percentiles (usec): 00:16:03.464 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 4686], 20.00th=[ 5080], 00:16:03.464 | 30.00th=[ 5342], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 5932], 00:16:03.464 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7177], 00:16:03.464 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10552], 99.95th=[11338], 00:16:03.464 | 99.99th=[12911] 00:16:03.464 bw ( KiB/s): min=12464, max=34624, per=89.21%, avg=27541.09, stdev=6878.82, samples=11 00:16:03.464 iops : min= 3116, max= 8656, avg=6885.27, stdev=1719.71, samples=11 00:16:03.464 lat (usec) : 500=0.01%, 1000=0.01% 00:16:03.464 lat (msec) : 2=0.03%, 4=2.04%, 10=97.16%, 20=0.76% 00:16:03.464 cpu : usr=5.60%, sys=25.76%, ctx=7349, majf=0, minf=108 00:16:03.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.464 issued rwts: total=80374,40535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.464 00:16:03.464 Run status group 0 (all jobs): 00:16:03.464 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=314MiB (329MB), run=6005-6005msec 00:16:03.464 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=158MiB (166MB), run=5252-5252msec 00:16:03.464 00:16:03.464 Disk stats (read/write): 00:16:03.464 nvme0n1: ios=78959/39917, merge=0/0, ticks=484417/211633, in_queue=696050, util=98.63% 00:16:03.464 03:59:38 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:03.724 03:59:38 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:03.982 03:59:38 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:16:03.982 03:59:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:03.982 03:59:38 -- target/multipath.sh@22 -- # local timeout=20 00:16:03.982 03:59:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:03.982 03:59:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:03.982 03:59:38 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:03.982 03:59:38 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:03.982 03:59:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:03.982 03:59:38 -- target/multipath.sh@22 -- # local timeout=20 00:16:03.982 03:59:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:03.982 03:59:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:03.982 03:59:38 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:16:03.982 03:59:38 -- target/multipath.sh@25 -- # sleep 1s 00:16:04.916 03:59:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:04.916 03:59:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:04.916 03:59:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:04.916 03:59:39 -- target/multipath.sh@113 -- # echo round-robin 00:16:04.916 03:59:39 -- target/multipath.sh@116 -- # fio_pid=75240 00:16:04.916 03:59:39 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:04.916 03:59:39 -- target/multipath.sh@118 -- # sleep 1 00:16:04.916 [global] 00:16:04.916 thread=1 00:16:04.916 invalidate=1 00:16:04.916 rw=randrw 00:16:04.916 time_based=1 00:16:04.916 runtime=6 00:16:04.916 ioengine=libaio 00:16:04.916 direct=1 00:16:04.916 bs=4096 00:16:04.916 iodepth=128 00:16:04.916 norandommap=0 00:16:04.916 numjobs=1 00:16:04.916 00:16:04.916 verify_dump=1 00:16:04.916 verify_backlog=512 00:16:04.916 verify_state_save=0 00:16:04.916 do_verify=1 00:16:04.916 verify=crc32c-intel 00:16:04.916 [job0] 00:16:04.916 filename=/dev/nvme0n1 00:16:04.916 Could not set queue depth (nvme0n1) 00:16:05.174 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.174 fio-3.35 00:16:05.174 Starting 1 thread 00:16:06.109 03:59:40 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:06.109 03:59:41 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:06.367 03:59:41 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:06.367 03:59:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:06.367 03:59:41 -- target/multipath.sh@22 -- # local timeout=20 00:16:06.367 03:59:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:06.367 03:59:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:06.367 03:59:41 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:06.367 03:59:41 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:06.367 03:59:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:06.367 03:59:41 -- target/multipath.sh@22 -- # local timeout=20 00:16:06.367 03:59:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:06.367 03:59:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:06.367 03:59:41 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:06.367 03:59:41 -- target/multipath.sh@25 -- # sleep 1s 00:16:07.742 03:59:42 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:07.742 03:59:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:07.742 03:59:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:07.742 03:59:42 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:07.742 03:59:42 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:08.000 03:59:42 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:08.000 03:59:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:08.000 03:59:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:08.000 03:59:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:08.000 03:59:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:08.000 03:59:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:08.000 03:59:42 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:08.000 03:59:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:08.000 03:59:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:08.000 03:59:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:08.000 03:59:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:08.000 03:59:42 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:08.000 03:59:42 -- target/multipath.sh@25 -- # sleep 1s 00:16:08.934 03:59:43 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:08.934 03:59:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:08.934 03:59:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:08.934 03:59:43 -- target/multipath.sh@132 -- # wait 75240 00:16:11.463 00:16:11.463 job0: (groupid=0, jobs=1): err= 0: pid=75261: Fri Nov 8 03:59:46 2024 00:16:11.463 read: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(310MiB/6003msec) 00:16:11.463 slat (usec): min=2, max=6496, avg=37.26, stdev=183.80 00:16:11.463 clat (usec): min=570, max=16270, avg=6735.58, stdev=1626.96 00:16:11.463 lat (usec): min=581, max=16276, avg=6772.83, stdev=1630.36 00:16:11.463 clat percentiles (usec): 00:16:11.463 | 1.00th=[ 2442], 5.00th=[ 4080], 10.00th=[ 5014], 20.00th=[ 5735], 00:16:11.463 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6980], 00:16:11.463 | 70.00th=[ 7242], 80.00th=[ 7701], 90.00th=[ 8586], 95.00th=[ 9634], 00:16:11.463 | 99.00th=[11469], 99.50th=[12256], 99.90th=[14222], 99.95th=[14877], 00:16:11.463 | 99.99th=[15795] 00:16:11.463 bw ( KiB/s): min=12640, max=35584, per=52.52%, avg=27751.27, stdev=6950.53, samples=11 00:16:11.463 iops : min= 3160, max= 8896, avg=6937.82, stdev=1737.63, samples=11 00:16:11.463 write: IOPS=7643, BW=29.9MiB/s (31.3MB/s)(155MiB/5192msec); 0 zone resets 00:16:11.463 slat (usec): min=2, max=4821, avg=48.32, stdev=126.05 00:16:11.463 clat (usec): min=602, max=13731, avg=5738.72, stdev=1413.93 00:16:11.463 lat (usec): min=638, max=13756, avg=5787.05, stdev=1417.64 00:16:11.463 clat percentiles (usec): 00:16:11.463 | 1.00th=[ 2180], 5.00th=[ 3032], 10.00th=[ 3687], 20.00th=[ 4948], 00:16:11.463 | 30.00th=[ 5342], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063], 00:16:11.463 | 70.00th=[ 6259], 80.00th=[ 6587], 90.00th=[ 7177], 95.00th=[ 8029], 00:16:11.463 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[11600], 99.95th=[12125], 00:16:11.463 | 99.99th=[13435] 00:16:11.463 bw ( KiB/s): min=13344, max=34736, per=90.70%, avg=27730.91, stdev=6529.98, samples=11 00:16:11.463 iops : min= 3336, max= 8684, avg=6932.73, stdev=1632.49, samples=11 00:16:11.463 lat (usec) : 750=0.01%, 1000=0.04% 00:16:11.463 lat (msec) : 2=0.47%, 4=6.75%, 10=89.87%, 20=2.85% 00:16:11.463 cpu : usr=6.10%, sys=23.24%, ctx=7324, majf=0, minf=127 00:16:11.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:11.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.463 issued rwts: total=79305,39683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.463 00:16:11.463 Run status group 0 (all jobs): 00:16:11.463 READ: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=310MiB (325MB), run=6003-6003msec 00:16:11.463 WRITE: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=155MiB (163MB), run=5192-5192msec 00:16:11.463 00:16:11.463 Disk stats (read/write): 00:16:11.463 nvme0n1: ios=78029/39154, merge=0/0, ticks=490126/209077, in_queue=699203, util=98.67% 00:16:11.463 03:59:46 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:11.463 03:59:46 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.464 03:59:46 -- common/autotest_common.sh@1208 -- # local i=0 00:16:11.464 03:59:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:11.464 03:59:46 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.464 03:59:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:11.464 03:59:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.464 03:59:46 -- common/autotest_common.sh@1220 -- # return 0 00:16:11.464 03:59:46 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.722 03:59:46 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:11.722 03:59:46 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:11.722 03:59:46 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:11.722 03:59:46 -- target/multipath.sh@144 -- # nvmftestfini 00:16:11.722 03:59:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:11.722 03:59:46 -- nvmf/common.sh@116 -- # sync 00:16:11.722 03:59:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:11.722 03:59:46 -- nvmf/common.sh@119 -- # set +e 00:16:11.722 03:59:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:11.722 03:59:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:11.722 rmmod nvme_tcp 00:16:11.722 rmmod nvme_fabrics 00:16:11.722 rmmod nvme_keyring 00:16:11.722 03:59:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:11.722 03:59:46 -- nvmf/common.sh@123 -- # set -e 00:16:11.722 03:59:46 -- nvmf/common.sh@124 -- # return 0 00:16:11.722 03:59:46 -- nvmf/common.sh@477 -- # '[' -n 74940 ']' 00:16:11.722 03:59:46 -- nvmf/common.sh@478 -- # killprocess 74940 00:16:11.722 03:59:46 -- common/autotest_common.sh@936 -- # '[' -z 74940 ']' 00:16:11.722 03:59:46 -- common/autotest_common.sh@940 -- # kill -0 74940 00:16:11.722 03:59:46 -- common/autotest_common.sh@941 -- # uname 00:16:11.722 03:59:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.722 03:59:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74940 00:16:11.722 killing process with pid 74940 00:16:11.722 03:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.722 03:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.722 03:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74940' 00:16:11.722 03:59:46 -- common/autotest_common.sh@955 -- # kill 74940 00:16:11.722 03:59:46 -- common/autotest_common.sh@960 -- # wait 74940 00:16:12.288 03:59:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.288 03:59:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.288 03:59:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.288 03:59:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.288 03:59:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.288 03:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.288 03:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.288 03:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.288 03:59:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:12.288 ************************************ 00:16:12.289 END TEST nvmf_multipath 00:16:12.289 ************************************ 00:16:12.289 00:16:12.289 real 0m20.893s 00:16:12.289 user 1m21.626s 00:16:12.289 sys 0m6.443s 00:16:12.289 03:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.289 03:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:12.289 03:59:47 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.289 03:59:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.289 03:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.289 03:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:12.289 ************************************ 00:16:12.289 START TEST nvmf_zcopy 00:16:12.289 ************************************ 00:16:12.289 03:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.289 * Looking for test storage... 00:16:12.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.289 03:59:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:12.289 03:59:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:12.289 03:59:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:12.547 03:59:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:12.547 03:59:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:12.547 03:59:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:12.547 03:59:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:12.547 03:59:47 -- scripts/common.sh@335 -- # IFS=.-: 00:16:12.547 03:59:47 -- scripts/common.sh@335 -- # read -ra ver1 00:16:12.547 03:59:47 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.547 03:59:47 -- scripts/common.sh@336 -- # read -ra ver2 00:16:12.547 03:59:47 -- scripts/common.sh@337 -- # local 'op=<' 00:16:12.547 03:59:47 -- scripts/common.sh@339 -- # ver1_l=2 00:16:12.547 03:59:47 -- scripts/common.sh@340 -- # ver2_l=1 00:16:12.547 03:59:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:12.547 03:59:47 -- scripts/common.sh@343 -- # case "$op" in 00:16:12.547 03:59:47 -- scripts/common.sh@344 -- # : 1 00:16:12.547 03:59:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:12.547 03:59:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.547 03:59:47 -- scripts/common.sh@364 -- # decimal 1 00:16:12.547 03:59:47 -- scripts/common.sh@352 -- # local d=1 00:16:12.547 03:59:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.547 03:59:47 -- scripts/common.sh@354 -- # echo 1 00:16:12.547 03:59:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:12.547 03:59:47 -- scripts/common.sh@365 -- # decimal 2 00:16:12.547 03:59:47 -- scripts/common.sh@352 -- # local d=2 00:16:12.547 03:59:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.547 03:59:47 -- scripts/common.sh@354 -- # echo 2 00:16:12.547 03:59:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:12.547 03:59:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:12.547 03:59:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:12.547 03:59:47 -- scripts/common.sh@367 -- # return 0 00:16:12.547 03:59:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.547 03:59:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:12.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.547 --rc genhtml_branch_coverage=1 00:16:12.547 --rc genhtml_function_coverage=1 00:16:12.547 --rc genhtml_legend=1 00:16:12.547 --rc geninfo_all_blocks=1 00:16:12.547 --rc geninfo_unexecuted_blocks=1 00:16:12.547 00:16:12.547 ' 00:16:12.547 03:59:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:12.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.547 --rc genhtml_branch_coverage=1 00:16:12.547 --rc genhtml_function_coverage=1 00:16:12.547 --rc genhtml_legend=1 00:16:12.547 --rc geninfo_all_blocks=1 00:16:12.547 --rc geninfo_unexecuted_blocks=1 00:16:12.547 00:16:12.547 ' 00:16:12.547 03:59:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:12.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.547 --rc genhtml_branch_coverage=1 00:16:12.547 --rc genhtml_function_coverage=1 00:16:12.547 --rc genhtml_legend=1 00:16:12.547 --rc geninfo_all_blocks=1 00:16:12.547 --rc geninfo_unexecuted_blocks=1 00:16:12.547 00:16:12.547 ' 00:16:12.547 03:59:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:12.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.547 --rc genhtml_branch_coverage=1 00:16:12.547 --rc genhtml_function_coverage=1 00:16:12.547 --rc genhtml_legend=1 00:16:12.547 --rc geninfo_all_blocks=1 00:16:12.547 --rc geninfo_unexecuted_blocks=1 00:16:12.547 00:16:12.547 ' 00:16:12.547 03:59:47 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.547 03:59:47 -- nvmf/common.sh@7 -- # uname -s 00:16:12.547 03:59:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.547 03:59:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.547 03:59:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.547 03:59:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.547 03:59:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.547 03:59:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.547 03:59:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.547 03:59:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.547 03:59:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.547 03:59:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:16:12.547 
03:59:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:16:12.547 03:59:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.547 03:59:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.547 03:59:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.547 03:59:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.547 03:59:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.547 03:59:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.547 03:59:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.547 03:59:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.547 03:59:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.547 03:59:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.547 03:59:47 -- paths/export.sh@5 -- # export PATH 00:16:12.547 03:59:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.547 03:59:47 -- nvmf/common.sh@46 -- # : 0 00:16:12.547 03:59:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.547 03:59:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.547 03:59:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.547 03:59:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.547 03:59:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.547 03:59:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
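
Annotation: nvme gen-hostnqn above mints a UUID-based host NQN, and common.sh reuses its UUID suffix as the host ID; both were passed to the two nvme connect calls in the multipath run earlier, one per listener address, producing two controllers under a single subsystem. A minimal sketch of that two-path connect, with flags exactly as logged (-g/-G are read here as the nvme-cli TCP header/data digest options — a hedged interpretation, not confirmed by the log itself):

# Derive the host identity the way nvmf/common.sh does above.
HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*:}            # common.sh reuses the UUID portion as the host ID
# One connect per listener address yields two paths to the same subsystem.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# Per-path ANA state, as polled by check_ana_state in the multipath test:
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

Flipping those ANA states between inaccessible, non-optimized, and optimized via nvmf_subsystem_listener_set_ana_state, while fio keeps I/O running, is what the multipath run above exercised.
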
00:16:12.547 03:59:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.547 03:59:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.547 03:59:47 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:12.547 03:59:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:12.547 03:59:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.547 03:59:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:12.547 03:59:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:12.547 03:59:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:12.547 03:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.547 03:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.547 03:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.547 03:59:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:12.547 03:59:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:12.547 03:59:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.548 03:59:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.548 03:59:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.548 03:59:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:12.548 03:59:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.548 03:59:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.548 03:59:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.548 03:59:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.548 03:59:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.548 03:59:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.548 03:59:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.548 03:59:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.548 03:59:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:12.548 03:59:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:12.548 Cannot find device "nvmf_tgt_br" 00:16:12.548 03:59:47 -- nvmf/common.sh@154 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.548 Cannot find device "nvmf_tgt_br2" 00:16:12.548 03:59:47 -- nvmf/common.sh@155 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:12.548 03:59:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:12.548 Cannot find device "nvmf_tgt_br" 00:16:12.548 03:59:47 -- nvmf/common.sh@157 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:12.548 Cannot find device "nvmf_tgt_br2" 00:16:12.548 03:59:47 -- nvmf/common.sh@158 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:12.548 03:59:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:12.548 03:59:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.548 03:59:47 -- nvmf/common.sh@161 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.548 03:59:47 -- nvmf/common.sh@162 -- # true 00:16:12.548 03:59:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.548 03:59:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.548 03:59:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.548 03:59:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.806 03:59:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.806 03:59:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.806 03:59:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.806 03:59:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.806 03:59:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.806 03:59:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:12.806 03:59:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:12.806 03:59:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:12.806 03:59:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:12.806 03:59:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.806 03:59:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.806 03:59:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.806 03:59:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:12.806 03:59:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:12.806 03:59:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.806 03:59:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.806 03:59:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.806 03:59:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.806 03:59:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.806 03:59:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:12.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:16:12.806 00:16:12.806 --- 10.0.0.2 ping statistics --- 00:16:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.806 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:12.806 03:59:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:12.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:12.806 00:16:12.806 --- 10.0.0.3 ping statistics --- 00:16:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.806 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:12.806 03:59:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:16:12.806 00:16:12.806 --- 10.0.0.1 ping statistics --- 00:16:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.806 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:16:12.806 03:59:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.806 03:59:47 -- nvmf/common.sh@421 -- # return 0 00:16:12.806 03:59:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:12.806 03:59:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.806 03:59:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:12.806 03:59:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:12.806 03:59:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.806 03:59:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:12.806 03:59:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:12.806 03:59:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:12.806 03:59:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:12.806 03:59:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.806 03:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:12.806 03:59:47 -- nvmf/common.sh@469 -- # nvmfpid=75552 00:16:12.806 03:59:47 -- nvmf/common.sh@470 -- # waitforlisten 75552 00:16:12.806 03:59:47 -- common/autotest_common.sh@829 -- # '[' -z 75552 ']' 00:16:12.806 03:59:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.806 03:59:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:12.806 03:59:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.806 03:59:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.806 03:59:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.806 03:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:12.806 [2024-11-08 03:59:47.901705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:12.806 [2024-11-08 03:59:47.901792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.065 [2024-11-08 03:59:48.043040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.065 [2024-11-08 03:59:48.137679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.065 [2024-11-08 03:59:48.137876] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.065 [2024-11-08 03:59:48.137895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.065 [2024-11-08 03:59:48.137907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
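
Annotation: with the namespace network rebuilt, the target app is started under ip netns exec with a one-core mask (-m 0x2) and the harness's waitforlisten blocks until the RPC socket answers; zcopy.sh then creates the zero-copy TCP transport seen just below. A minimal analog, assuming the rpc.py path from the log and using a simplified readiness loop in place of waitforlisten (which also tracks the PID); the flag readings in the comments are our interpretation, with -o carried over verbatim from NVMF_TRANSPORT_OPTS above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Launch the target inside the namespace; the UNIX-domain RPC socket at
# /var/tmp/spdk.sock stays host-visible since network namespaces do not
# isolate the filesystem.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
# Zero-copy TCP transport as issued by zcopy.sh: -c 0 is read as forcing the
# in-capsule data size to zero so reads take the zero-copy path enabled by --zcopy.
"$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy

The subsystem, listener, and malloc0 namespace are then added with the same rpc_cmd calls visible in the trace that follows.
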
00:16:13.065 [2024-11-08 03:59:48.137952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.001 03:59:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.001 03:59:48 -- common/autotest_common.sh@862 -- # return 0 00:16:14.001 03:59:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.001 03:59:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 03:59:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.001 03:59:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:14.001 03:59:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 [2024-11-08 03:59:48.860287] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 [2024-11-08 03:59:48.876449] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 malloc0 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:14.001 03:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.001 03:59:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 03:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.001 03:59:48 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:14.001 03:59:48 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:14.001 03:59:48 -- nvmf/common.sh@520 -- # config=() 00:16:14.001 03:59:48 -- nvmf/common.sh@520 -- # local subsystem config 00:16:14.001 03:59:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:14.001 03:59:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:14.001 { 00:16:14.001 "params": { 00:16:14.001 "name": "Nvme$subsystem", 00:16:14.001 "trtype": "$TEST_TRANSPORT", 
00:16:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.001 "adrfam": "ipv4", 00:16:14.001 "trsvcid": "$NVMF_PORT", 00:16:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.001 "hdgst": ${hdgst:-false}, 00:16:14.001 "ddgst": ${ddgst:-false} 00:16:14.001 }, 00:16:14.001 "method": "bdev_nvme_attach_controller" 00:16:14.001 } 00:16:14.001 EOF 00:16:14.001 )") 00:16:14.001 03:59:48 -- nvmf/common.sh@542 -- # cat 00:16:14.001 03:59:48 -- nvmf/common.sh@544 -- # jq . 00:16:14.001 03:59:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:14.001 03:59:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:14.001 "params": { 00:16:14.001 "name": "Nvme1", 00:16:14.001 "trtype": "tcp", 00:16:14.001 "traddr": "10.0.0.2", 00:16:14.001 "adrfam": "ipv4", 00:16:14.001 "trsvcid": "4420", 00:16:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.001 "hdgst": false, 00:16:14.001 "ddgst": false 00:16:14.001 }, 00:16:14.001 "method": "bdev_nvme_attach_controller" 00:16:14.001 }' 00:16:14.001 [2024-11-08 03:59:48.969574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.001 [2024-11-08 03:59:48.969661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75603 ] 00:16:14.260 [2024-11-08 03:59:49.112243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.260 [2024-11-08 03:59:49.215108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.518 Running I/O for 10 seconds... 00:16:24.501 00:16:24.502 Latency(us) 00:16:24.502 [2024-11-08T03:59:59.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.502 [2024-11-08T03:59:59.613Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:24.502 Verification LBA range: start 0x0 length 0x1000 00:16:24.502 Nvme1n1 : 10.01 11194.85 87.46 0.00 0.00 11406.47 1139.43 19660.80 00:16:24.502 [2024-11-08T03:59:59.613Z] =================================================================================================================== 00:16:24.502 [2024-11-08T03:59:59.613Z] Total : 11194.85 87.46 0.00 0.00 11406.47 1139.43 19660.80 00:16:24.761 03:59:59 -- target/zcopy.sh@39 -- # perfpid=75715 00:16:24.761 03:59:59 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:24.761 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:16:24.761 03:59:59 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:24.761 03:59:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:24.761 03:59:59 -- nvmf/common.sh@520 -- # config=() 00:16:24.761 03:59:59 -- nvmf/common.sh@520 -- # local subsystem config 00:16:24.761 03:59:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:24.761 03:59:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:24.761 { 00:16:24.761 "params": { 00:16:24.761 "name": "Nvme$subsystem", 00:16:24.761 "trtype": "$TEST_TRANSPORT", 00:16:24.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.761 "adrfam": "ipv4", 00:16:24.761 "trsvcid": "$NVMF_PORT", 00:16:24.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.761 "hdgst": ${hdgst:-false}, 00:16:24.761 "ddgst": ${ddgst:-false} 
00:16:24.761 }, 00:16:24.761 "method": "bdev_nvme_attach_controller" 00:16:24.761 } 00:16:24.761 EOF 00:16:24.761 )") 00:16:24.761 03:59:59 -- nvmf/common.sh@542 -- # cat 00:16:24.761 [2024-11-08 03:59:59.739277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.761 [2024-11-08 03:59:59.739338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.761 03:59:59 -- nvmf/common.sh@544 -- # jq . 00:16:24.761 2024/11/08 03:59:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:24.761 03:59:59 -- nvmf/common.sh@545 -- # IFS=, 00:16:24.761 [2024-11-08 03:59:59.747240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.761 [2024-11-08 03:59:59.747261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.761 03:59:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:24.761 "params": { 00:16:24.761 "name": "Nvme1", 00:16:24.761 "trtype": "tcp", 00:16:24.761 "traddr": "10.0.0.2", 00:16:24.761 "adrfam": "ipv4", 00:16:24.761 "trsvcid": "4420", 00:16:24.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.761 "hdgst": false, 00:16:24.761 "ddgst": false 00:16:24.761 }, 00:16:24.761 "method": "bdev_nvme_attach_controller" 00:16:24.761 }' 00:16:24.761 2024/11/08 03:59:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:24.761 [2024-11-08 03:59:59.755236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.761 [2024-11-08 03:59:59.755265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.761 2024/11/08 03:59:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:24.761 [2024-11-08 03:59:59.767241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.761 [2024-11-08 03:59:59.767268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.761 2024/11/08 03:59:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:24.761 [2024-11-08 03:59:59.777925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:24.761 [2024-11-08 03:59:59.778167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75715 ]
00:16:24.761 [2024-11-08 03:59:59.779244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:24.761 [2024-11-08 03:59:59.779266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:24.761 2024/11/08 03:59:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line pattern (NSID collision, failed namespace add, JSON-RPC error -32602) repeats at 03:59:59.791 through 03:59:59.899 ...]
00:16:25.021 [2024-11-08 03:59:59.911062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[... the pattern repeats at 03:59:59.911 through 03:59:59.983 ...]
00:16:25.021 [2024-11-08 03:59:59.993936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
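The repeating failure is self-consistent: NSID 1 is already occupied on nqn.2016-06.io.spdk:cnode1, so every further nvmf_subsystem_add_ns call asking for NSID 1 is rejected by the target and surfaces to the client as JSON-RPC error -32602, the standard "Invalid params" code. A minimal reproduction sketch, assuming a running nvmf target and scripts/rpc.py on PATH (neither is part of this captured output):

# Hypothetical reproduction of the NSID collision seen in this log.
rpc.py nvmf_create_transport -t tcp                                   # TCP transport
rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a            # allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add: OK
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # second add fails:
# "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters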
[... the pattern repeats at 03:59:59.995 through 04:00:00.167 ...]
[... the pattern repeats at 04:00:00.179 through 04:00:00.203 ...]
00:16:25.286 Running I/O for 5 seconds...
[... the pattern repeats at 04:00:00.221 through 04:00:00.270 ...]
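"Running I/O for 5 seconds..." marks the start of the timed bdevperf pass; the EAL line earlier shows it was launched single-core (-c 0x1) with --no-shconf and its own --file-prefix so it can coexist with the target process, and the add_ns probing keeps failing underneath the running I/O. A comparable standalone invocation might look like the sketch below; the binary path, queue depth, I/O size, and workload are assumptions, and only the 5-second duration comes from the log:

# Sketch of a comparable 5-second bdevperf run (path and workload flags assumed):
./build/examples/bdevperf -m 0x1 -q 64 -o 4096 -w verify -t 5 \
    --json /tmp/bdevperf.json   # bdev config describing the attached Nvme1 controller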
[... the pattern repeats continuously from 04:00:00.288 through 04:00:01.837 ...]
00:16:26.844 [2024-11-08 04:00:01.851381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:26.844 [2024-11-08 04:00:01.851437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:26.844 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method,
err: Code=-32602 Msg=Invalid parameters 00:16:26.844 [2024-11-08 04:00:01.866882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.844 [2024-11-08 04:00:01.866915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.844 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.845 [2024-11-08 04:00:01.884751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.845 [2024-11-08 04:00:01.884800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.845 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.845 [2024-11-08 04:00:01.899322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.845 [2024-11-08 04:00:01.899370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.845 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.845 [2024-11-08 04:00:01.919610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.845 [2024-11-08 04:00:01.919657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.845 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.845 [2024-11-08 04:00:01.935989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.845 [2024-11-08 04:00:01.936022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.845 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:01.953059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:01.953110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:01.969164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:01.969198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:01.987338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:01.987371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:02.001982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:02.002028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:02.018435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:02.018480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:02.034579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.104 [2024-11-08 04:00:02.034611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.104 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.104 [2024-11-08 04:00:02.050418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.050473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.068085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.068117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.082986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.083032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.095878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.095927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.113703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.113769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.128793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.128840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.140168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.140201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.156263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.156310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.173008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.173040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.189602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.189652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.105 [2024-11-08 04:00:02.205848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.105 [2024-11-08 04:00:02.205896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.105 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.221325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.221372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.236992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.237040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.254250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.254296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.270721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.270770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.287997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.288044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.303621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.303667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.318559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.318612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.364 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.364 [2024-11-08 04:00:02.336154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.364 [2024-11-08 04:00:02.336203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.352325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.352373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.369670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.369705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.385909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.385957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.403151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.403199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.419164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.419215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.436376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.436488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.452314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.452365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.365 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.365 [2024-11-08 04:00:02.469685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.365 [2024-11-08 04:00:02.469722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.624 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.624 [2024-11-08 04:00:02.484957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.624 [2024-11-08 04:00:02.485003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.624 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.502009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.502055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.516910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.516957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.528363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.528396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.544410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.544482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.560944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.560976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.578762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.578823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.595115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.595147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.612400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.612441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.628154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.628186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.645433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.645489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 
04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.660587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.660620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.676338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.676370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.692809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.692839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.708963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.708995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.625 [2024-11-08 04:00:02.725222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.625 [2024-11-08 04:00:02.725253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.625 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.742538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.742570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.759407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.759467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.776035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.776067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.793394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.793507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.807940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.807972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.823882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.823914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.840370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.840402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.857026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.857058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.874262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.884 [2024-11-08 04:00:02.874293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:27.884 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.884 [2024-11-08 04:00:02.890799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.890830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.907386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.907458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.923993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.924025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.940393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.940435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.957252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.957286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.973705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.973771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.885 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.885 [2024-11-08 04:00:02.990907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.885 [2024-11-08 04:00:02.990955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.005604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.005639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.021699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.021732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.038046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.038077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.054992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.055024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.071093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.071141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.087030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.087062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.102254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.102285] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.118761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.118794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.130050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.130084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.145896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.145927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.162568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.162601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.179923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.179955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.196631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.196663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.212956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 
04:00:03.212987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.229406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.229468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.144 [2024-11-08 04:00:03.244787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.144 [2024-11-08 04:00:03.244818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.144 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.403 [2024-11-08 04:00:03.254634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.403 [2024-11-08 04:00:03.254691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.403 [2024-11-08 04:00:03.270081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.403 [2024-11-08 04:00:03.270112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.403 [2024-11-08 04:00:03.286279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.403 [2024-11-08 04:00:03.286314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.403 [2024-11-08 04:00:03.303106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.403 [2024-11-08 04:00:03.303138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.403 [2024-11-08 04:00:03.319297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:28.403 [2024-11-08 04:00:03.319327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:28.403 [2024-11-08 04:00:03.335513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:28.403 [2024-11-08 04:00:03.335544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:28.403 2024/11/08 04:00:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record sequence (spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use", nvmf_rpc_ns_paused "Unable to add namespace", JSON-RPC Code=-32602) repeats for every retry, roughly every 8-18 ms, from 04:00:03.351 through 04:00:05.151 ...]
00:16:30.221 [2024-11-08 04:00:05.151526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:30.221 [2024-11-08 04:00:05.151555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:30.221 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the sequence continues through the final attempts of the run at 04:00:05.167, 04:00:05.179, 04:00:05.195 and 04:00:05.210 ...]
00:16:30.221 [2024-11-08 04:00:05.218755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:30.221 [2024-11-08 04:00:05.218782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:30.221
00:16:30.221                                                                  Latency(us)
00:16:30.221 [2024-11-08T04:00:05.332Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:30.221 [2024-11-08T04:00:05.332Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:30.221 Nvme1n1                     :       5.01   13848.25     108.19       0.00       0.00    9233.81    2338.44   17158.52
00:16:30.221 [2024-11-08T04:00:05.332Z] ===================================================================================================================
00:16:30.221 [2024-11-08T04:00:05.332Z] Total                       :           13848.25     108.19       0.00       0.00    9233.81    2338.44   17158.52
00:16:30.221 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... after the run summary the same error sequence continues during teardown, now at ~12 ms intervals, from 04:00:05.230 through 04:00:05.450 ...]
00:16:30.481 [2024-11-08 04:00:05.462790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:30.481 [2024-11-08 04:00:05.462812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:30.481 [2024-11-08 04:00:05.474793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.481 [2024-11-08 04:00:05.474816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.481 [2024-11-08 04:00:05.486796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.481 [2024-11-08 04:00:05.486815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.481 [2024-11-08 04:00:05.498799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.481 [2024-11-08 04:00:05.498822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.481 [2024-11-08 04:00:05.510804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.481 [2024-11-08 04:00:05.510827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.481 [2024-11-08 04:00:05.522824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.481 [2024-11-08 04:00:05.522848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.481 2024/11/08 04:00:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.481 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75715) - No such process 00:16:30.481 04:00:05 -- target/zcopy.sh@49 -- # wait 75715 00:16:30.481 04:00:05 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.481 04:00:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.481 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:30.481 04:00:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.481 04:00:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:30.481 04:00:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.481 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:30.481 delay0 00:16:30.481 04:00:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.481 04:00:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:30.481 
04:00:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.481 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:30.481 04:00:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.481 04:00:05 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:30.740 [2024-11-08 04:00:05.724625] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:37.302 Initializing NVMe Controllers 00:16:37.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.302 Initialization complete. Launching workers. 00:16:37.302 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 149 00:16:37.302 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 436, failed to submit 33 00:16:37.302 success 257, unsuccess 179, failed 0 00:16:37.302 04:00:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:37.302 04:00:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:37.302 04:00:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.302 04:00:11 -- nvmf/common.sh@116 -- # sync 00:16:37.302 04:00:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:37.302 04:00:11 -- nvmf/common.sh@119 -- # set +e 00:16:37.302 04:00:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:37.302 04:00:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:37.302 rmmod nvme_tcp 00:16:37.302 rmmod nvme_fabrics 00:16:37.302 rmmod nvme_keyring 00:16:37.302 04:00:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.302 04:00:11 -- nvmf/common.sh@123 -- # set -e 00:16:37.302 04:00:11 -- nvmf/common.sh@124 -- # return 0 00:16:37.302 04:00:11 -- nvmf/common.sh@477 -- # '[' -n 75552 ']' 00:16:37.302 04:00:11 -- nvmf/common.sh@478 -- # killprocess 75552 00:16:37.302 04:00:11 -- common/autotest_common.sh@936 -- # '[' -z 75552 ']' 00:16:37.302 04:00:11 -- common/autotest_common.sh@940 -- # kill -0 75552 00:16:37.302 04:00:11 -- common/autotest_common.sh@941 -- # uname 00:16:37.302 04:00:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.302 04:00:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75552 00:16:37.302 killing process with pid 75552 00:16:37.302 04:00:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.302 04:00:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.302 04:00:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75552' 00:16:37.302 04:00:11 -- common/autotest_common.sh@955 -- # kill 75552 00:16:37.302 04:00:11 -- common/autotest_common.sh@960 -- # wait 75552 00:16:37.302 04:00:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.302 04:00:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.302 04:00:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:37.302 04:00:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.302 04:00:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.302 04:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.302 04:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.302 04:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
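For reference before the test wraps up, the sequence the zcopy trace above just exercised -- retry nvmf_subsystem_add_ns against a paused subsystem, then swap the namespace onto a deliberately slow delay bdev and race abort commands against queued I/O -- can be reproduced by hand. This is a sketch assembled from the rpc_cmd calls in the log; rpc.py is the one the suite itself points at (/home/vagrant/spdk_repo/spdk/scripts/rpc.py), and the -r/-t/-w/-n values mirror the trace (avg/p99 read and write latencies, which the delay bdev takes in microseconds):

  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # stack a delay bdev on top of malloc0, ~1 s of injected latency per I/O
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # with that much latency, in-flight I/O sticks around long enough to be aborted:
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above ("abort submitted 436, failed to submit 33, success 257") is the expected shape of the result: some aborts land, some miss I/O that already completed.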
00:16:37.302 04:00:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:37.302 00:16:37.302 real 0m24.934s 00:16:37.302 user 0m39.115s 00:16:37.302 sys 0m7.194s 00:16:37.302 ************************************ 00:16:37.302 END TEST nvmf_zcopy 00:16:37.302 ************************************ 00:16:37.302 04:00:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:37.302 04:00:12 -- common/autotest_common.sh@10 -- # set +x 00:16:37.302 04:00:12 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.302 04:00:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:37.302 04:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.302 04:00:12 -- common/autotest_common.sh@10 -- # set +x 00:16:37.302 ************************************ 00:16:37.302 START TEST nvmf_nmic 00:16:37.302 ************************************ 00:16:37.302 04:00:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.302 * Looking for test storage... 00:16:37.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:37.302 04:00:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:37.302 04:00:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:37.302 04:00:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:37.302 04:00:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:37.302 04:00:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:37.302 04:00:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:37.302 04:00:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:37.302 04:00:12 -- scripts/common.sh@335 -- # IFS=.-: 00:16:37.302 04:00:12 -- scripts/common.sh@335 -- # read -ra ver1 00:16:37.302 04:00:12 -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.303 04:00:12 -- scripts/common.sh@336 -- # read -ra ver2 00:16:37.303 04:00:12 -- scripts/common.sh@337 -- # local 'op=<' 00:16:37.303 04:00:12 -- scripts/common.sh@339 -- # ver1_l=2 00:16:37.303 04:00:12 -- scripts/common.sh@340 -- # ver2_l=1 00:16:37.303 04:00:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:37.303 04:00:12 -- scripts/common.sh@343 -- # case "$op" in 00:16:37.303 04:00:12 -- scripts/common.sh@344 -- # : 1 00:16:37.303 04:00:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:37.303 04:00:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.303 04:00:12 -- scripts/common.sh@364 -- # decimal 1 00:16:37.303 04:00:12 -- scripts/common.sh@352 -- # local d=1 00:16:37.303 04:00:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.303 04:00:12 -- scripts/common.sh@354 -- # echo 1 00:16:37.303 04:00:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:37.303 04:00:12 -- scripts/common.sh@365 -- # decimal 2 00:16:37.303 04:00:12 -- scripts/common.sh@352 -- # local d=2 00:16:37.562 04:00:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.562 04:00:12 -- scripts/common.sh@354 -- # echo 2 00:16:37.562 04:00:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:37.562 04:00:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:37.562 04:00:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:37.562 04:00:12 -- scripts/common.sh@367 -- # return 0 00:16:37.562 04:00:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.562 04:00:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:37.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.562 --rc genhtml_branch_coverage=1 00:16:37.562 --rc genhtml_function_coverage=1 00:16:37.562 --rc genhtml_legend=1 00:16:37.562 --rc geninfo_all_blocks=1 00:16:37.562 --rc geninfo_unexecuted_blocks=1 00:16:37.562 00:16:37.562 ' 00:16:37.562 04:00:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:37.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.562 --rc genhtml_branch_coverage=1 00:16:37.562 --rc genhtml_function_coverage=1 00:16:37.562 --rc genhtml_legend=1 00:16:37.562 --rc geninfo_all_blocks=1 00:16:37.562 --rc geninfo_unexecuted_blocks=1 00:16:37.562 00:16:37.562 ' 00:16:37.562 04:00:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:37.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.562 --rc genhtml_branch_coverage=1 00:16:37.562 --rc genhtml_function_coverage=1 00:16:37.562 --rc genhtml_legend=1 00:16:37.562 --rc geninfo_all_blocks=1 00:16:37.562 --rc geninfo_unexecuted_blocks=1 00:16:37.562 00:16:37.562 ' 00:16:37.562 04:00:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:37.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.562 --rc genhtml_branch_coverage=1 00:16:37.562 --rc genhtml_function_coverage=1 00:16:37.562 --rc genhtml_legend=1 00:16:37.562 --rc geninfo_all_blocks=1 00:16:37.562 --rc geninfo_unexecuted_blocks=1 00:16:37.562 00:16:37.562 ' 00:16:37.562 04:00:12 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.562 04:00:12 -- nvmf/common.sh@7 -- # uname -s 00:16:37.562 04:00:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.562 04:00:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.562 04:00:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.562 04:00:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.562 04:00:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.562 04:00:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.562 04:00:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.562 04:00:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.562 04:00:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.562 04:00:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.562 04:00:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:16:37.562 
04:00:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:16:37.562 04:00:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.562 04:00:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.562 04:00:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.562 04:00:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.562 04:00:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.562 04:00:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.562 04:00:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.562 04:00:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.562 04:00:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.562 04:00:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.562 04:00:12 -- paths/export.sh@5 -- # export PATH 00:16:37.562 04:00:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.562 04:00:12 -- nvmf/common.sh@46 -- # : 0 00:16:37.562 04:00:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:37.562 04:00:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:37.562 04:00:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:37.562 04:00:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.562 04:00:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.562 04:00:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
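The NVME_HOST array populated just above carries the initiator identity that every later nvme connect reuses. A minimal sketch of how the pieces fit together, using the values from this run (the NQN comes from the 'nvme gen-hostnqn' call traced above, and its UUID doubles as the host ID):

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01
  NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # expanded later by the connect step in the nmic test below:
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420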
00:16:37.562 04:00:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:37.562 04:00:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:37.562 04:00:12 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.562 04:00:12 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.562 04:00:12 -- target/nmic.sh@14 -- # nvmftestinit 00:16:37.562 04:00:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:37.562 04:00:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.562 04:00:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:37.562 04:00:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:37.562 04:00:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:37.562 04:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.562 04:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.562 04:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.562 04:00:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:37.562 04:00:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:37.562 04:00:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:37.563 04:00:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:37.563 04:00:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:37.563 04:00:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:37.563 04:00:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.563 04:00:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.563 04:00:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.563 04:00:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:37.563 04:00:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.563 04:00:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.563 04:00:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.563 04:00:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.563 04:00:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.563 04:00:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.563 04:00:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.563 04:00:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.563 04:00:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:37.563 04:00:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:37.563 Cannot find device "nvmf_tgt_br" 00:16:37.563 04:00:12 -- nvmf/common.sh@154 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.563 Cannot find device "nvmf_tgt_br2" 00:16:37.563 04:00:12 -- nvmf/common.sh@155 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:37.563 04:00:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:37.563 Cannot find device "nvmf_tgt_br" 00:16:37.563 04:00:12 -- nvmf/common.sh@157 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:37.563 Cannot find device "nvmf_tgt_br2" 00:16:37.563 04:00:12 -- nvmf/common.sh@158 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:37.563 04:00:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:37.563 04:00:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.563 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:37.563 04:00:12 -- nvmf/common.sh@161 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.563 04:00:12 -- nvmf/common.sh@162 -- # true 00:16:37.563 04:00:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.563 04:00:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.563 04:00:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.563 04:00:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.563 04:00:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.563 04:00:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.563 04:00:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:37.563 04:00:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.563 04:00:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:37.563 04:00:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:37.563 04:00:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:37.563 04:00:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:37.563 04:00:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:37.563 04:00:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.563 04:00:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.563 04:00:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.821 04:00:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:37.821 04:00:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:37.821 04:00:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.821 04:00:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.821 04:00:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.821 04:00:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.821 04:00:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.821 04:00:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:37.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:37.821 00:16:37.821 --- 10.0.0.2 ping statistics --- 00:16:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.821 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:37.821 04:00:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:37.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:37.821 00:16:37.821 --- 10.0.0.3 ping statistics --- 00:16:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.821 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:37.821 04:00:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:37.821 00:16:37.821 --- 10.0.0.1 ping statistics --- 00:16:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.821 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:37.821 04:00:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.821 04:00:12 -- nvmf/common.sh@421 -- # return 0 00:16:37.821 04:00:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:37.821 04:00:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.822 04:00:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:37.822 04:00:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:37.822 04:00:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.822 04:00:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:37.822 04:00:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:37.822 04:00:12 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:37.822 04:00:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:37.822 04:00:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.822 04:00:12 -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 04:00:12 -- nvmf/common.sh@469 -- # nvmfpid=76050 00:16:37.822 04:00:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.822 04:00:12 -- nvmf/common.sh@470 -- # waitforlisten 76050 00:16:37.822 04:00:12 -- common/autotest_common.sh@829 -- # '[' -z 76050 ']' 00:16:37.822 04:00:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.822 04:00:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.822 04:00:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.822 04:00:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.822 04:00:12 -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 [2024-11-08 04:00:12.824694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:37.822 [2024-11-08 04:00:12.824781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.080 [2024-11-08 04:00:12.962639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.080 [2024-11-08 04:00:13.049428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.080 [2024-11-08 04:00:13.049771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.080 [2024-11-08 04:00:13.049874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.080 [2024-11-08 04:00:13.049945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
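For orientation, the veth/bridge plumbing that nvmf_veth_init rebuilt above reduces to the following: one veth pair per interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers bridged together. The commands are lifted from the trace (run as root; the second target interface, nvmf_tgt_if2 at 10.0.0.3, is wired identically, and every device then gets an "ip link set ... up"):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side goes into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are the sanity check that this topology forwards in both directions before the target starts.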
00:16:38.080 [2024-11-08 04:00:13.050260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.080 [2024-11-08 04:00:13.050396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.080 [2024-11-08 04:00:13.050531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.080 [2024-11-08 04:00:13.050544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.647 04:00:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.647 04:00:13 -- common/autotest_common.sh@862 -- # return 0 00:16:38.647 04:00:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:38.647 04:00:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.647 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:00:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.906 04:00:13 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 [2024-11-08 04:00:13.797656] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 Malloc0 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 [2024-11-08 04:00:13.867770] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:38.906 test case1: single bdev can't be used in multiple subsystems 00:16:38.906 04:00:13 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@28 -- # nmic_status=0 00:16:38.906 04:00:13 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 [2024-11-08 04:00:13.891606] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:38.906 [2024-11-08 04:00:13.891737] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:38.906 [2024-11-08 04:00:13.891806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.906 2024/11/08 04:00:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:38.906 request: 00:16:38.906 { 00:16:38.906 "method": "nvmf_subsystem_add_ns", 00:16:38.906 "params": { 00:16:38.906 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:38.906 "namespace": { 00:16:38.906 "bdev_name": "Malloc0" 00:16:38.906 } 00:16:38.906 } 00:16:38.906 } 00:16:38.906 Got JSON-RPC error response 00:16:38.906 GoRPCClient: error on JSON-RPC call 00:16:38.906 Adding namespace failed - expected result. 00:16:38.906 test case2: host connect to nvmf target in multiple paths 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@29 -- # nmic_status=1 00:16:38.906 04:00:13 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:38.906 04:00:13 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
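Test case1 above turns on bdev claiming: once Malloc0 became cnode1's namespace it was claimed exclusive_write by the NVMe-oF target module (as the bdev.c:7940 line says), so cnode2's add_ns is refused and the RPC returns Code=-32602 -- the expected failure. Restated as standalone rpc.py calls, the bring-up it exercises looks like this (a sketch mirroring the rpc_cmd trace above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed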
00:16:38.906 04:00:13 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:38.906 04:00:13 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:38.906 04:00:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 [2024-11-08 04:00:13.903722] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:38.906 04:00:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 04:00:13 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.166 04:00:14 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:39.166 04:00:14 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.166 04:00:14 -- common/autotest_common.sh@1187 -- # local i=0 00:16:39.166 04:00:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.166 04:00:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:39.166 04:00:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:41.726 04:00:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:41.726 04:00:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:41.726 04:00:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.726 04:00:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:41.726 04:00:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.726 04:00:16 -- common/autotest_common.sh@1197 -- # return 0 00:16:41.726 04:00:16 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:41.726 [global] 00:16:41.726 thread=1 00:16:41.726 invalidate=1 00:16:41.726 rw=write 00:16:41.726 time_based=1 00:16:41.726 runtime=1 00:16:41.726 ioengine=libaio 00:16:41.726 direct=1 00:16:41.726 bs=4096 00:16:41.726 iodepth=1 00:16:41.726 norandommap=0 00:16:41.726 numjobs=1 00:16:41.726 00:16:41.726 verify_dump=1 00:16:41.726 verify_backlog=512 00:16:41.726 verify_state_save=0 00:16:41.726 do_verify=1 00:16:41.726 verify=crc32c-intel 00:16:41.726 [job0] 00:16:41.726 filename=/dev/nvme0n1 00:16:41.726 Could not set queue depth (nvme0n1) 00:16:41.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:41.726 fio-3.35 00:16:41.726 Starting 1 thread 00:16:42.663 00:16:42.663 job0: (groupid=0, jobs=1): err= 0: pid=76162: Fri Nov 8 04:00:17 2024 00:16:42.663 read: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:42.663 slat (nsec): min=13214, max=82046, avg=16416.33, stdev=5802.13 00:16:42.663 clat (usec): min=122, max=485, avg=152.62, stdev=19.05 00:16:42.663 lat (usec): min=136, max=515, avg=169.04, stdev=20.04 00:16:42.663 clat percentiles (usec): 00:16:42.663 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:16:42.663 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:16:42.663 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 188], 00:16:42.663 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 
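Before the results below: the job file fio-wrapper printed above maps one-to-one onto fio command-line options, so an equivalent direct invocation would look roughly like this (an approximation -- the wrapper actually hands fio the job file; fio-3.35 is the version in use here):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --invalidate=1 --norandommap=0 --verify=crc32c-intel --do_verify=1 \
      --verify_dump=1 --verify_backlog=512 --verify_state_save=0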
239], 99.95th=[ 245], 00:16:42.663 | 99.99th=[ 486] 00:16:42.664 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:42.664 slat (usec): min=19, max=156, avg=25.30, stdev= 8.42 00:16:42.664 clat (usec): min=82, max=307, avg=105.37, stdev=14.62 00:16:42.664 lat (usec): min=103, max=329, avg=130.67, stdev=17.55 00:16:42.664 clat percentiles (usec): 00:16:42.664 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:16:42.664 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 105], 00:16:42.664 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 125], 95.00th=[ 135], 00:16:42.664 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 188], 99.95th=[ 235], 00:16:42.664 | 99.99th=[ 310] 00:16:42.664 bw ( KiB/s): min=14080, max=14080, per=98.31%, avg=14080.00, stdev= 0.00, samples=1 00:16:42.664 iops : min= 3520, max= 3520, avg=3520.00, stdev= 0.00, samples=1 00:16:42.664 lat (usec) : 100=23.29%, 250=76.68%, 500=0.03% 00:16:42.664 cpu : usr=1.90%, sys=10.70%, ctx=6664, majf=0, minf=5 00:16:42.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.664 issued rwts: total=3079,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.664 00:16:42.664 Run status group 0 (all jobs): 00:16:42.664 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:42.664 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:42.664 00:16:42.664 Disk stats (read/write): 00:16:42.664 nvme0n1: ios=2911/3072, merge=0/0, ticks=506/381, in_queue=887, util=91.48% 00:16:42.664 04:00:17 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:42.664 04:00:17 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.664 04:00:17 -- common/autotest_common.sh@1208 -- # local i=0 00:16:42.664 04:00:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:42.664 04:00:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.922 04:00:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:42.922 04:00:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.922 04:00:17 -- common/autotest_common.sh@1220 -- # return 0 00:16:42.922 04:00:17 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:42.922 04:00:17 -- target/nmic.sh@53 -- # nvmftestfini 00:16:42.922 04:00:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.922 04:00:17 -- nvmf/common.sh@116 -- # sync 00:16:42.922 04:00:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.922 04:00:17 -- nvmf/common.sh@119 -- # set +e 00:16:42.922 04:00:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.922 04:00:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.922 rmmod nvme_tcp 00:16:42.922 rmmod nvme_fabrics 00:16:42.922 rmmod nvme_keyring 00:16:42.922 04:00:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.922 04:00:17 -- nvmf/common.sh@123 -- # set -e 00:16:42.922 04:00:17 -- nvmf/common.sh@124 -- # return 0 00:16:42.922 04:00:17 -- nvmf/common.sh@477 -- # '[' -n 76050 ']' 00:16:42.922 04:00:17 -- nvmf/common.sh@478 -- # killprocess 76050 
00:16:42.922 04:00:17 -- common/autotest_common.sh@936 -- # '[' -z 76050 ']' 00:16:42.922 04:00:17 -- common/autotest_common.sh@940 -- # kill -0 76050 00:16:42.922 04:00:17 -- common/autotest_common.sh@941 -- # uname 00:16:42.922 04:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.922 04:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76050 00:16:42.922 killing process with pid 76050 00:16:42.922 04:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.922 04:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.922 04:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76050' 00:16:42.922 04:00:17 -- common/autotest_common.sh@955 -- # kill 76050 00:16:42.922 04:00:17 -- common/autotest_common.sh@960 -- # wait 76050 00:16:43.490 04:00:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:43.490 04:00:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:43.490 04:00:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:43.490 04:00:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.490 04:00:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:43.490 04:00:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.490 04:00:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.490 04:00:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.490 04:00:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:43.490 00:16:43.490 real 0m6.086s 00:16:43.490 user 0m20.470s 00:16:43.490 sys 0m1.338s 00:16:43.490 04:00:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:43.490 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:16:43.490 ************************************ 00:16:43.490 END TEST nvmf_nmic 00:16:43.490 ************************************ 00:16:43.490 04:00:18 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:43.490 04:00:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.490 04:00:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.490 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:16:43.490 ************************************ 00:16:43.490 START TEST nvmf_fio_target 00:16:43.490 ************************************ 00:16:43.490 04:00:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:43.490 * Looking for test storage... 
00:16:43.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:43.490 [the lcov version probe, the LCOV_OPTS/LCOV exports, the sourcing of /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh and scripts/common.sh, the paths/export.sh PATH exports, and the nvmf/common.sh defaults (ports 4420-4422, host NQN nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01, NET_TYPE=virt) repeat here verbatim from the nvmf_nmic preamble above, now under target/fio.sh]
00:16:43.491 04:00:18 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:43.491 04:00:18 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:43.491 04:00:18 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:43.491 04:00:18 -- target/fio.sh@16 -- # nvmftestinit
00:16:43.491 [the nvmftestinit trace repeats as in the nmic run: trap, prepare_net_devs, remove_spdk_ns, the NET_TYPE checks, then the nvmf_veth_init variable setup of nvmf/common.sh@140-@144]
00:16:43.491 04:00:18 -- nvmf/common.sh@145 -- #
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.491 04:00:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.491 04:00:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.491 04:00:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.491 04:00:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.491 04:00:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.491 04:00:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.491 04:00:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:43.491 04:00:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:43.491 Cannot find device "nvmf_tgt_br" 00:16:43.491 04:00:18 -- nvmf/common.sh@154 -- # true 00:16:43.491 04:00:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.750 Cannot find device "nvmf_tgt_br2" 00:16:43.750 04:00:18 -- nvmf/common.sh@155 -- # true 00:16:43.750 04:00:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:43.750 04:00:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:43.750 Cannot find device "nvmf_tgt_br" 00:16:43.750 04:00:18 -- nvmf/common.sh@157 -- # true 00:16:43.750 04:00:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:43.750 Cannot find device "nvmf_tgt_br2" 00:16:43.750 04:00:18 -- nvmf/common.sh@158 -- # true 00:16:43.750 04:00:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:43.750 04:00:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:43.750 04:00:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.750 04:00:18 -- nvmf/common.sh@161 -- # true 00:16:43.750 04:00:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.750 04:00:18 -- nvmf/common.sh@162 -- # true 00:16:43.750 04:00:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.750 04:00:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.750 04:00:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.750 04:00:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.750 04:00:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.750 04:00:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.750 04:00:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.750 04:00:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.750 04:00:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.750 04:00:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.750 04:00:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.750 04:00:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.750 04:00:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.750 04:00:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.750 04:00:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
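[editor's note — not part of the captured output] The nvmf_veth_init trace above builds the virtual test network: one namespace (nvmf_tgt_ns_spdk) holding the target ends of three veth pairs, with the host-side peers later enslaved to the nvmf_br bridge and TCP port 4420 opened toward the initiator. A condensed, standalone sketch of the same topology, assuming the interface names and 10.0.0.0/24 addressing shown in the trace:

    # Recreate the SPDK nvmf test network by hand (names/addresses taken from the log above).
    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge the host-side peers,
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up                                  # bring everything up,
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br                    # enslave peers to the bridge,
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # and allow NVMe/TCP in

The three pings that follow in the trace (10.0.0.2, 10.0.0.3 from the host; 10.0.0.1 from inside the namespace) verify this topology end to end before the target starts.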
00:16:43.750 04:00:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.750 04:00:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.750 04:00:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.750 04:00:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.750 04:00:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.008 04:00:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.008 04:00:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.008 04:00:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.008 04:00:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:44.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:16:44.008 00:16:44.008 --- 10.0.0.2 ping statistics --- 00:16:44.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.008 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:44.008 04:00:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:44.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:16:44.008 00:16:44.008 --- 10.0.0.3 ping statistics --- 00:16:44.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.008 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:44.008 04:00:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:44.008 00:16:44.008 --- 10.0.0.1 ping statistics --- 00:16:44.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.008 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:44.009 04:00:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.009 04:00:18 -- nvmf/common.sh@421 -- # return 0 00:16:44.009 04:00:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.009 04:00:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.009 04:00:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.009 04:00:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.009 04:00:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.009 04:00:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.009 04:00:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:44.009 04:00:18 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:44.009 04:00:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.009 04:00:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.009 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:16:44.009 04:00:18 -- nvmf/common.sh@469 -- # nvmfpid=76347 00:16:44.009 04:00:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.009 04:00:18 -- nvmf/common.sh@470 -- # waitforlisten 76347 00:16:44.009 04:00:18 -- common/autotest_common.sh@829 -- # '[' -z 76347 ']' 00:16:44.009 04:00:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.009 04:00:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.009 04:00:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:44.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.009 04:00:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.009 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:16:44.009 [2024-11-08 04:00:18.989679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:44.009 [2024-11-08 04:00:18.990075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.267 [2024-11-08 04:00:19.120676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.267 [2024-11-08 04:00:19.207459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.267 [2024-11-08 04:00:19.207610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.267 [2024-11-08 04:00:19.207623] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.267 [2024-11-08 04:00:19.207631] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.267 [2024-11-08 04:00:19.207767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.267 [2024-11-08 04:00:19.207917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.267 [2024-11-08 04:00:19.208470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.267 [2024-11-08 04:00:19.208481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.201 04:00:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.201 04:00:20 -- common/autotest_common.sh@862 -- # return 0 00:16:45.201 04:00:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.201 04:00:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.201 04:00:20 -- common/autotest_common.sh@10 -- # set +x 00:16:45.201 04:00:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.201 04:00:20 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.460 [2024-11-08 04:00:20.325020] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.460 04:00:20 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.717 04:00:20 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:45.717 04:00:20 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.976 04:00:20 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:45.976 04:00:20 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.234 04:00:21 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:46.234 04:00:21 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.493 04:00:21 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:46.493 04:00:21 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:46.493 04:00:21 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.059 04:00:21 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:16:47.059 04:00:21 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.059 04:00:22 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:47.059 04:00:22 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.317 04:00:22 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:47.317 04:00:22 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:47.576 04:00:22 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.834 04:00:22 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:47.834 04:00:22 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.092 04:00:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:48.092 04:00:23 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.351 04:00:23 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.609 [2024-11-08 04:00:23.543092] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.609 04:00:23 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:48.868 04:00:23 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:49.126 04:00:23 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.126 04:00:24 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:49.126 04:00:24 -- common/autotest_common.sh@1187 -- # local i=0 00:16:49.126 04:00:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.126 04:00:24 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:49.126 04:00:24 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:49.126 04:00:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:51.659 04:00:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:51.659 04:00:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:51.659 04:00:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.659 04:00:26 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:51.659 04:00:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.659 04:00:26 -- common/autotest_common.sh@1197 -- # return 0 00:16:51.659 04:00:26 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:51.659 [global] 00:16:51.659 thread=1 00:16:51.659 invalidate=1 00:16:51.659 rw=write 00:16:51.659 time_based=1 00:16:51.659 runtime=1 00:16:51.659 ioengine=libaio 00:16:51.659 direct=1 00:16:51.659 bs=4096 00:16:51.659 iodepth=1 00:16:51.659 norandommap=0 00:16:51.659 numjobs=1 00:16:51.659 00:16:51.659 verify_dump=1 00:16:51.659 verify_backlog=512 00:16:51.659 
verify_state_save=0 00:16:51.659 do_verify=1 00:16:51.659 verify=crc32c-intel 00:16:51.659 [job0] 00:16:51.659 filename=/dev/nvme0n1 00:16:51.659 [job1] 00:16:51.659 filename=/dev/nvme0n2 00:16:51.659 [job2] 00:16:51.659 filename=/dev/nvme0n3 00:16:51.659 [job3] 00:16:51.659 filename=/dev/nvme0n4 00:16:51.659 Could not set queue depth (nvme0n1) 00:16:51.659 Could not set queue depth (nvme0n2) 00:16:51.659 Could not set queue depth (nvme0n3) 00:16:51.659 Could not set queue depth (nvme0n4) 00:16:51.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.659 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.659 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.659 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.659 fio-3.35 00:16:51.659 Starting 4 threads 00:16:52.594 00:16:52.594 job0: (groupid=0, jobs=1): err= 0: pid=76635: Fri Nov 8 04:00:27 2024 00:16:52.594 read: IOPS=978, BW=3912KiB/s (4006kB/s)(3916KiB/1001msec) 00:16:52.594 slat (nsec): min=11805, max=86782, avg=21803.79, stdev=7354.25 00:16:52.594 clat (usec): min=273, max=41997, avg=608.07, stdev=1326.13 00:16:52.594 lat (usec): min=298, max=42009, avg=629.87, stdev=1325.85 00:16:52.594 clat percentiles (usec): 00:16:52.594 | 1.00th=[ 400], 5.00th=[ 457], 10.00th=[ 486], 20.00th=[ 510], 00:16:52.594 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 578], 00:16:52.594 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 693], 00:16:52.594 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[42206], 99.95th=[42206], 00:16:52.594 | 99.99th=[42206] 00:16:52.594 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:52.594 slat (usec): min=17, max=117, avg=31.13, stdev= 9.07 00:16:52.594 clat (usec): min=165, max=545, avg=338.57, stdev=59.09 00:16:52.594 lat (usec): min=198, max=581, avg=369.70, stdev=57.77 00:16:52.594 clat percentiles (usec): 00:16:52.594 | 1.00th=[ 210], 5.00th=[ 237], 10.00th=[ 258], 20.00th=[ 285], 00:16:52.594 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 359], 00:16:52.594 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 429], 00:16:52.594 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 545], 00:16:52.594 | 99.99th=[ 545] 00:16:52.594 bw ( KiB/s): min= 4096, max= 4096, per=17.09%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.594 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.594 lat (usec) : 250=4.19%, 500=53.77%, 750=41.14%, 1000=0.85% 00:16:52.594 lat (msec) : 50=0.05% 00:16:52.594 cpu : usr=1.30%, sys=4.20%, ctx=2003, majf=0, minf=15 00:16:52.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.594 issued rwts: total=979,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.594 job1: (groupid=0, jobs=1): err= 0: pid=76636: Fri Nov 8 04:00:27 2024 00:16:52.594 read: IOPS=1385, BW=5542KiB/s (5675kB/s)(5548KiB/1001msec) 00:16:52.595 slat (nsec): min=16213, max=97414, avg=24339.68, stdev=10170.42 00:16:52.595 clat (usec): min=154, max=2129, avg=348.67, stdev=119.13 00:16:52.595 lat (usec): min=171, 
max=2147, avg=373.01, stdev=124.09 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 204], 20.00th=[ 277], 00:16:52.595 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 334], 00:16:52.595 | 70.00th=[ 392], 80.00th=[ 457], 90.00th=[ 515], 95.00th=[ 545], 00:16:52.595 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 1074], 99.95th=[ 2114], 00:16:52.595 | 99.99th=[ 2114] 00:16:52.595 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:52.595 slat (usec): min=25, max=118, avg=37.51, stdev=11.15 00:16:52.595 clat (usec): min=145, max=483, avg=271.67, stdev=56.35 00:16:52.595 lat (usec): min=182, max=524, avg=309.18, stdev=60.09 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229], 00:16:52.595 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 265], 00:16:52.595 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 375], 95.00th=[ 396], 00:16:52.595 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 469], 99.95th=[ 486], 00:16:52.595 | 99.99th=[ 486] 00:16:52.595 bw ( KiB/s): min= 8192, max= 8192, per=34.18%, avg=8192.00, stdev= 0.00, samples=1 00:16:52.595 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:52.595 lat (usec) : 250=30.96%, 500=63.29%, 750=5.68% 00:16:52.595 lat (msec) : 2=0.03%, 4=0.03% 00:16:52.595 cpu : usr=1.80%, sys=6.80%, ctx=2923, majf=0, minf=4 00:16:52.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 issued rwts: total=1387,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.595 job2: (groupid=0, jobs=1): err= 0: pid=76637: Fri Nov 8 04:00:27 2024 00:16:52.595 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4024KiB/1001msec) 00:16:52.595 slat (usec): min=15, max=102, avg=21.66, stdev= 6.05 00:16:52.595 clat (usec): min=140, max=41946, avg=598.81, stdev=1307.45 00:16:52.595 lat (usec): min=162, max=41968, avg=620.48, stdev=1307.43 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 186], 5.00th=[ 437], 10.00th=[ 486], 20.00th=[ 510], 00:16:52.595 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 578], 00:16:52.595 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 676], 00:16:52.595 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 840], 99.95th=[42206], 00:16:52.595 | 99.99th=[42206] 00:16:52.595 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:52.595 slat (nsec): min=18431, max=84867, avg=30505.86, stdev=7831.83 00:16:52.595 clat (usec): min=140, max=653, avg=331.72, stdev=68.31 00:16:52.595 lat (usec): min=171, max=702, avg=362.23, stdev=67.45 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 157], 5.00th=[ 194], 10.00th=[ 237], 20.00th=[ 281], 00:16:52.595 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 355], 00:16:52.595 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 433], 00:16:52.595 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 652], 00:16:52.595 | 99.99th=[ 652] 00:16:52.595 bw ( KiB/s): min= 4096, max= 4096, per=17.09%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.595 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.595 lat (usec) : 250=7.34%, 500=50.00%, 750=42.27%, 1000=0.34% 00:16:52.595 lat (msec) : 50=0.05% 
00:16:52.595 cpu : usr=0.70%, sys=4.70%, ctx=2030, majf=0, minf=13 00:16:52.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 issued rwts: total=1006,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.595 job3: (groupid=0, jobs=1): err= 0: pid=76638: Fri Nov 8 04:00:27 2024 00:16:52.595 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:52.595 slat (nsec): min=12444, max=58329, avg=16188.59, stdev=4368.50 00:16:52.595 clat (usec): min=175, max=336, avg=222.60, stdev=24.01 00:16:52.595 lat (usec): min=189, max=353, avg=238.79, stdev=24.44 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:16:52.595 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:16:52.595 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 269], 00:16:52.595 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 330], 00:16:52.595 | 99.99th=[ 338] 00:16:52.595 write: IOPS=2411, BW=9646KiB/s (9878kB/s)(9656KiB/1001msec); 0 zone resets 00:16:52.595 slat (usec): min=18, max=151, avg=24.86, stdev= 7.18 00:16:52.595 clat (usec): min=127, max=401, avg=183.94, stdev=27.82 00:16:52.595 lat (usec): min=148, max=422, avg=208.80, stdev=29.56 00:16:52.595 clat percentiles (usec): 00:16:52.595 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:16:52.595 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 186], 00:16:52.595 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 237], 00:16:52.595 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 375], 99.95th=[ 388], 00:16:52.595 | 99.99th=[ 404] 00:16:52.595 bw ( KiB/s): min= 9272, max= 9272, per=38.68%, avg=9272.00, stdev= 0.00, samples=1 00:16:52.595 iops : min= 2318, max= 2318, avg=2318.00, stdev= 0.00, samples=1 00:16:52.595 lat (usec) : 250=92.60%, 500=7.40% 00:16:52.595 cpu : usr=1.60%, sys=6.70%, ctx=4467, majf=0, minf=13 00:16:52.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.595 issued rwts: total=2048,2414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.595 00:16:52.595 Run status group 0 (all jobs): 00:16:52.595 READ: bw=21.2MiB/s (22.2MB/s), 3912KiB/s-8184KiB/s (4006kB/s-8380kB/s), io=21.2MiB (22.2MB), run=1001-1001msec 00:16:52.595 WRITE: bw=23.4MiB/s (24.5MB/s), 4092KiB/s-9646KiB/s (4190kB/s-9878kB/s), io=23.4MiB (24.6MB), run=1001-1001msec 00:16:52.595 00:16:52.595 Disk stats (read/write): 00:16:52.595 nvme0n1: ios=793/1024, merge=0/0, ticks=483/373, in_queue=856, util=87.58% 00:16:52.595 nvme0n2: ios=1073/1453, merge=0/0, ticks=418/434, in_queue=852, util=88.95% 00:16:52.595 nvme0n3: ios=755/1024, merge=0/0, ticks=459/354, in_queue=813, util=88.38% 00:16:52.595 nvme0n4: ios=1746/2048, merge=0/0, ticks=403/410, in_queue=813, util=89.73% 00:16:52.595 04:00:27 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:52.595 [global] 00:16:52.595 thread=1 00:16:52.595 invalidate=1 00:16:52.595 rw=randwrite 00:16:52.595 
time_based=1 00:16:52.595 runtime=1 00:16:52.595 ioengine=libaio 00:16:52.595 direct=1 00:16:52.595 bs=4096 00:16:52.595 iodepth=1 00:16:52.595 norandommap=0 00:16:52.595 numjobs=1 00:16:52.595 00:16:52.595 verify_dump=1 00:16:52.595 verify_backlog=512 00:16:52.595 verify_state_save=0 00:16:52.595 do_verify=1 00:16:52.595 verify=crc32c-intel 00:16:52.595 [job0] 00:16:52.595 filename=/dev/nvme0n1 00:16:52.595 [job1] 00:16:52.595 filename=/dev/nvme0n2 00:16:52.595 [job2] 00:16:52.595 filename=/dev/nvme0n3 00:16:52.595 [job3] 00:16:52.595 filename=/dev/nvme0n4 00:16:52.595 Could not set queue depth (nvme0n1) 00:16:52.595 Could not set queue depth (nvme0n2) 00:16:52.595 Could not set queue depth (nvme0n3) 00:16:52.595 Could not set queue depth (nvme0n4) 00:16:52.854 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.854 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.854 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.854 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.854 fio-3.35 00:16:52.854 Starting 4 threads 00:16:54.230 00:16:54.230 job0: (groupid=0, jobs=1): err= 0: pid=76704: Fri Nov 8 04:00:28 2024 00:16:54.230 read: IOPS=1392, BW=5570KiB/s (5704kB/s)(5576KiB/1001msec) 00:16:54.230 slat (nsec): min=16430, max=90278, avg=25127.41, stdev=9280.39 00:16:54.230 clat (usec): min=163, max=667, avg=341.37, stdev=44.75 00:16:54.230 lat (usec): min=182, max=699, avg=366.50, stdev=43.02 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 217], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 310], 00:16:54.230 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:16:54.230 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 416], 00:16:54.230 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 562], 99.95th=[ 668], 00:16:54.230 | 99.99th=[ 668] 00:16:54.230 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:54.230 slat (usec): min=27, max=149, avg=38.47, stdev= 9.02 00:16:54.230 clat (usec): min=141, max=472, avg=274.94, stdev=47.02 00:16:54.230 lat (usec): min=177, max=516, avg=313.41, stdev=46.96 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 196], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 235], 00:16:54.230 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:16:54.230 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 375], 00:16:54.230 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 474], 00:16:54.230 | 99.99th=[ 474] 00:16:54.230 bw ( KiB/s): min= 8120, max= 8120, per=26.84%, avg=8120.00, stdev= 0.00, samples=1 00:16:54.230 iops : min= 2030, max= 2030, avg=2030.00, stdev= 0.00, samples=1 00:16:54.230 lat (usec) : 250=18.36%, 500=81.43%, 750=0.20% 00:16:54.230 cpu : usr=1.90%, sys=7.00%, ctx=2931, majf=0, minf=11 00:16:54.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 issued rwts: total=1394,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.230 job1: (groupid=0, jobs=1): err= 0: pid=76705: Fri Nov 8 04:00:28 2024 00:16:54.230 read: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:54.230 slat (nsec): min=11144, max=76144, avg=15855.44, stdev=5938.36 00:16:54.230 clat (usec): min=136, max=758, avg=224.65, stdev=38.82 00:16:54.230 lat (usec): min=148, max=773, avg=240.51, stdev=39.46 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 196], 00:16:54.230 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 229], 00:16:54.230 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:16:54.230 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 586], 99.95th=[ 635], 00:16:54.230 | 99.99th=[ 758] 00:16:54.230 write: IOPS=2349, BW=9399KiB/s (9624kB/s)(9408KiB/1001msec); 0 zone resets 00:16:54.230 slat (usec): min=17, max=136, avg=24.31, stdev= 8.09 00:16:54.230 clat (usec): min=104, max=1539, avg=188.24, stdev=46.22 00:16:54.230 lat (usec): min=123, max=1559, avg=212.56, stdev=48.09 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 124], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 157], 00:16:54.230 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 194], 00:16:54.230 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 251], 00:16:54.230 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 433], 99.95th=[ 441], 00:16:54.230 | 99.99th=[ 1532] 00:16:54.230 bw ( KiB/s): min= 8848, max= 8848, per=29.25%, avg=8848.00, stdev= 0.00, samples=1 00:16:54.230 iops : min= 2212, max= 2212, avg=2212.00, stdev= 0.00, samples=1 00:16:54.230 lat (usec) : 250=87.09%, 500=12.80%, 750=0.07%, 1000=0.02% 00:16:54.230 lat (msec) : 2=0.02% 00:16:54.230 cpu : usr=1.90%, sys=6.50%, ctx=4401, majf=0, minf=10 00:16:54.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 issued rwts: total=2048,2352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.230 job2: (groupid=0, jobs=1): err= 0: pid=76706: Fri Nov 8 04:00:28 2024 00:16:54.230 read: IOPS=1386, BW=5546KiB/s (5680kB/s)(5552KiB/1001msec) 00:16:54.230 slat (nsec): min=16861, max=86210, avg=24519.39, stdev=8401.45 00:16:54.230 clat (usec): min=182, max=3244, avg=344.01, stdev=84.99 00:16:54.230 lat (usec): min=201, max=3262, avg=368.53, stdev=84.89 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 314], 00:16:54.230 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 351], 00:16:54.230 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 400], 00:16:54.230 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 578], 99.95th=[ 3261], 00:16:54.230 | 99.99th=[ 3261] 00:16:54.230 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:54.230 slat (usec): min=26, max=107, avg=37.56, stdev= 8.85 00:16:54.230 clat (usec): min=128, max=708, avg=275.41, stdev=47.79 00:16:54.230 lat (usec): min=159, max=738, avg=312.97, stdev=47.50 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 194], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 237], 00:16:54.230 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 281], 00:16:54.230 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 375], 00:16:54.230 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 709], 00:16:54.230 | 99.99th=[ 709] 00:16:54.230 bw ( KiB/s): min= 8096, max= 8096, per=26.76%, 
avg=8096.00, stdev= 0.00, samples=1 00:16:54.230 iops : min= 2024, max= 2024, avg=2024.00, stdev= 0.00, samples=1 00:16:54.230 lat (usec) : 250=17.41%, 500=82.42%, 750=0.14% 00:16:54.230 lat (msec) : 4=0.03% 00:16:54.230 cpu : usr=1.30%, sys=7.20%, ctx=2924, majf=0, minf=17 00:16:54.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 issued rwts: total=1388,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.230 job3: (groupid=0, jobs=1): err= 0: pid=76707: Fri Nov 8 04:00:28 2024 00:16:54.230 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:54.230 slat (nsec): min=12213, max=48004, avg=15306.60, stdev=3898.97 00:16:54.230 clat (usec): min=162, max=1727, avg=236.05, stdev=47.40 00:16:54.230 lat (usec): min=175, max=1741, avg=251.36, stdev=47.67 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 206], 00:16:54.230 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 241], 00:16:54.230 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:16:54.230 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 562], 00:16:54.230 | 99.99th=[ 1729] 00:16:54.230 write: IOPS=2143, BW=8575KiB/s (8781kB/s)(8584KiB/1001msec); 0 zone resets 00:16:54.230 slat (usec): min=18, max=115, avg=23.07, stdev= 6.15 00:16:54.230 clat (usec): min=116, max=662, avg=200.21, stdev=36.56 00:16:54.230 lat (usec): min=135, max=683, avg=223.28, stdev=37.83 00:16:54.230 clat percentiles (usec): 00:16:54.230 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 167], 00:16:54.230 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 208], 00:16:54.230 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 260], 00:16:54.230 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 400], 00:16:54.230 | 99.99th=[ 660] 00:16:54.230 bw ( KiB/s): min= 8256, max= 8256, per=27.29%, avg=8256.00, stdev= 0.00, samples=1 00:16:54.230 iops : min= 2064, max= 2064, avg=2064.00, stdev= 0.00, samples=1 00:16:54.230 lat (usec) : 250=80.40%, 500=19.53%, 750=0.05% 00:16:54.230 lat (msec) : 2=0.02% 00:16:54.230 cpu : usr=1.40%, sys=5.80%, ctx=4196, majf=0, minf=12 00:16:54.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.230 issued rwts: total=2048,2146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.230 00:16:54.230 Run status group 0 (all jobs): 00:16:54.230 READ: bw=26.8MiB/s (28.1MB/s), 5546KiB/s-8184KiB/s (5680kB/s-8380kB/s), io=26.9MiB (28.2MB), run=1001-1001msec 00:16:54.230 WRITE: bw=29.5MiB/s (31.0MB/s), 6138KiB/s-9399KiB/s (6285kB/s-9624kB/s), io=29.6MiB (31.0MB), run=1001-1001msec 00:16:54.230 00:16:54.230 Disk stats (read/write): 00:16:54.230 nvme0n1: ios=1074/1529, merge=0/0, ticks=403/454, in_queue=857, util=88.48% 00:16:54.230 nvme0n2: ios=1776/2048, merge=0/0, ticks=403/410, in_queue=813, util=88.27% 00:16:54.230 nvme0n3: ios=1024/1514, merge=0/0, ticks=354/429, in_queue=783, util=88.84% 00:16:54.230 nvme0n4: ios=1580/2048, merge=0/0, ticks=375/432, in_queue=807, 
util=89.78% 00:16:54.230 04:00:28 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:54.230 [global] 00:16:54.230 thread=1 00:16:54.230 invalidate=1 00:16:54.230 rw=write 00:16:54.230 time_based=1 00:16:54.230 runtime=1 00:16:54.230 ioengine=libaio 00:16:54.230 direct=1 00:16:54.230 bs=4096 00:16:54.230 iodepth=128 00:16:54.230 norandommap=0 00:16:54.230 numjobs=1 00:16:54.230 00:16:54.230 verify_dump=1 00:16:54.230 verify_backlog=512 00:16:54.230 verify_state_save=0 00:16:54.230 do_verify=1 00:16:54.230 verify=crc32c-intel 00:16:54.230 [job0] 00:16:54.230 filename=/dev/nvme0n1 00:16:54.230 [job1] 00:16:54.230 filename=/dev/nvme0n2 00:16:54.230 [job2] 00:16:54.230 filename=/dev/nvme0n3 00:16:54.230 [job3] 00:16:54.230 filename=/dev/nvme0n4 00:16:54.230 Could not set queue depth (nvme0n1) 00:16:54.230 Could not set queue depth (nvme0n2) 00:16:54.230 Could not set queue depth (nvme0n3) 00:16:54.230 Could not set queue depth (nvme0n4) 00:16:54.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.231 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.231 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.231 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.231 fio-3.35 00:16:54.231 Starting 4 threads 00:16:55.607 00:16:55.607 job0: (groupid=0, jobs=1): err= 0: pid=76762: Fri Nov 8 04:00:30 2024 00:16:55.607 read: IOPS=1992, BW=7968KiB/s (8159kB/s)(8016KiB/1006msec) 00:16:55.607 slat (usec): min=4, max=14899, avg=253.77, stdev=1299.23 00:16:55.607 clat (usec): min=2840, max=46568, avg=31385.53, stdev=5678.29 00:16:55.607 lat (usec): min=7962, max=48960, avg=31639.30, stdev=5763.99 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[11994], 5.00th=[21890], 10.00th=[25035], 20.00th=[27132], 00:16:55.607 | 30.00th=[28705], 40.00th=[31327], 50.00th=[32113], 60.00th=[32900], 00:16:55.607 | 70.00th=[34341], 80.00th=[34866], 90.00th=[37487], 95.00th=[40109], 00:16:55.607 | 99.00th=[45351], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:16:55.607 | 99.99th=[46400] 00:16:55.607 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:16:55.607 slat (usec): min=10, max=12251, avg=232.83, stdev=1030.72 00:16:55.607 clat (usec): min=12626, max=47677, avg=31279.77, stdev=7182.64 00:16:55.607 lat (usec): min=12651, max=51263, avg=31512.60, stdev=7270.94 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[14091], 5.00th=[19006], 10.00th=[22152], 20.00th=[23987], 00:16:55.607 | 30.00th=[26346], 40.00th=[31327], 50.00th=[33424], 60.00th=[33817], 00:16:55.607 | 70.00th=[35390], 80.00th=[38011], 90.00th=[39584], 95.00th=[40633], 00:16:55.607 | 99.00th=[43779], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:16:55.607 | 99.99th=[47449] 00:16:55.607 bw ( KiB/s): min= 8192, max= 8192, per=17.63%, avg=8192.00, stdev= 0.00, samples=2 00:16:55.607 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:55.607 lat (msec) : 4=0.02%, 10=0.10%, 20=4.39%, 50=95.48% 00:16:55.607 cpu : usr=2.09%, sys=6.27%, ctx=515, majf=0, minf=13 00:16:55.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:55.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.607 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.607 issued rwts: total=2004,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.607 job1: (groupid=0, jobs=1): err= 0: pid=76763: Fri Nov 8 04:00:30 2024 00:16:55.607 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:16:55.607 slat (usec): min=5, max=19797, avg=132.21, stdev=693.72 00:16:55.607 clat (usec): min=9803, max=46177, avg=17294.13, stdev=6989.10 00:16:55.607 lat (usec): min=10647, max=46214, avg=17426.35, stdev=7038.30 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12649], 20.00th=[13173], 00:16:55.607 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:16:55.607 | 70.00th=[15139], 80.00th=[23200], 90.00th=[28967], 95.00th=[33817], 00:16:55.607 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[42206], 00:16:55.607 | 99.99th=[46400] 00:16:55.607 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1005msec); 0 zone resets 00:16:55.607 slat (usec): min=4, max=7290, avg=125.98, stdev=519.56 00:16:55.607 clat (usec): min=3889, max=36151, avg=16244.08, stdev=4542.46 00:16:55.607 lat (usec): min=7612, max=36391, avg=16370.06, stdev=4556.62 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[10814], 5.00th=[11863], 10.00th=[12387], 20.00th=[13042], 00:16:55.607 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:16:55.607 | 70.00th=[15926], 80.00th=[18744], 90.00th=[23200], 95.00th=[26346], 00:16:55.607 | 99.00th=[32375], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:16:55.607 | 99.99th=[35914] 00:16:55.607 bw ( KiB/s): min=12672, max=17827, per=32.82%, avg=15249.50, stdev=3645.14, samples=2 00:16:55.607 iops : min= 3168, max= 4456, avg=3812.00, stdev=910.75, samples=2 00:16:55.607 lat (msec) : 4=0.01%, 10=0.35%, 20=79.04%, 50=20.60% 00:16:55.607 cpu : usr=3.19%, sys=11.35%, ctx=703, majf=0, minf=7 00:16:55.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:55.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.607 issued rwts: total=3584,3935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.607 job2: (groupid=0, jobs=1): err= 0: pid=76764: Fri Nov 8 04:00:30 2024 00:16:55.607 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:16:55.607 slat (usec): min=3, max=10376, avg=143.88, stdev=784.46 00:16:55.607 clat (usec): min=8912, max=38543, avg=18406.85, stdev=6205.97 00:16:55.607 lat (usec): min=8949, max=38559, avg=18550.73, stdev=6267.02 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[10290], 5.00th=[13042], 10.00th=[13698], 20.00th=[14091], 00:16:55.607 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[16188], 00:16:55.607 | 70.00th=[18220], 80.00th=[25560], 90.00th=[29230], 95.00th=[31589], 00:16:55.607 | 99.00th=[33424], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:16:55.607 | 99.99th=[38536] 00:16:55.607 write: IOPS=3639, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1004msec); 0 zone resets 00:16:55.607 slat (usec): min=5, max=8667, avg=124.90, stdev=666.09 00:16:55.607 clat (usec): min=2074, max=33757, avg=16654.19, stdev=4275.56 00:16:55.607 lat (usec): min=7334, max=33774, avg=16779.09, stdev=4281.67 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[ 8717], 
5.00th=[10028], 10.00th=[13173], 20.00th=[14746], 00:16:55.607 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:16:55.607 | 70.00th=[16188], 80.00th=[17171], 90.00th=[24249], 95.00th=[27132], 00:16:55.607 | 99.00th=[29492], 99.50th=[30802], 99.90th=[31851], 99.95th=[32900], 00:16:55.607 | 99.99th=[33817] 00:16:55.607 bw ( KiB/s): min=12288, max=16416, per=30.89%, avg=14352.00, stdev=2918.94, samples=2 00:16:55.607 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:16:55.607 lat (msec) : 4=0.01%, 10=2.89%, 20=76.36%, 50=20.74% 00:16:55.607 cpu : usr=3.19%, sys=10.37%, ctx=515, majf=0, minf=14 00:16:55.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:55.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.607 issued rwts: total=3584,3654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.607 job3: (groupid=0, jobs=1): err= 0: pid=76765: Fri Nov 8 04:00:30 2024 00:16:55.607 read: IOPS=1941, BW=7765KiB/s (7951kB/s)(7788KiB/1003msec) 00:16:55.607 slat (usec): min=4, max=12187, avg=251.41, stdev=1202.42 00:16:55.607 clat (usec): min=2328, max=50952, avg=31437.37, stdev=6788.85 00:16:55.607 lat (usec): min=7097, max=61121, avg=31688.78, stdev=6880.06 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[ 7898], 5.00th=[20317], 10.00th=[25035], 20.00th=[26346], 00:16:55.607 | 30.00th=[28967], 40.00th=[30278], 50.00th=[32637], 60.00th=[34341], 00:16:55.607 | 70.00th=[34866], 80.00th=[35914], 90.00th=[38536], 95.00th=[41681], 00:16:55.607 | 99.00th=[48497], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:16:55.607 | 99.99th=[51119] 00:16:55.607 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:16:55.607 slat (usec): min=5, max=12388, avg=241.05, stdev=1134.59 00:16:55.607 clat (usec): min=14921, max=47463, avg=31515.18, stdev=6403.66 00:16:55.607 lat (usec): min=14951, max=48022, avg=31756.23, stdev=6489.41 00:16:55.607 clat percentiles (usec): 00:16:55.607 | 1.00th=[17433], 5.00th=[20055], 10.00th=[21890], 20.00th=[25560], 00:16:55.607 | 30.00th=[27395], 40.00th=[31327], 50.00th=[33162], 60.00th=[33817], 00:16:55.607 | 70.00th=[34866], 80.00th=[36963], 90.00th=[38536], 95.00th=[40109], 00:16:55.607 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:16:55.607 | 99.99th=[47449] 00:16:55.607 bw ( KiB/s): min= 8192, max= 8192, per=17.63%, avg=8192.00, stdev= 0.00, samples=2 00:16:55.607 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:55.607 lat (msec) : 4=0.03%, 10=1.05%, 20=3.80%, 50=94.69%, 100=0.43% 00:16:55.607 cpu : usr=2.30%, sys=5.99%, ctx=506, majf=0, minf=19 00:16:55.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:55.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.607 issued rwts: total=1947,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.607 00:16:55.607 Run status group 0 (all jobs): 00:16:55.607 READ: bw=43.2MiB/s (45.3MB/s), 7765KiB/s-13.9MiB/s (7951kB/s-14.6MB/s), io=43.4MiB (45.5MB), run=1003-1006msec 00:16:55.607 WRITE: bw=45.4MiB/s (47.6MB/s), 8143KiB/s-15.3MiB/s (8339kB/s-16.0MB/s), io=45.6MiB (47.9MB), 
run=1003-1006msec 00:16:55.607 00:16:55.607 Disk stats (read/write): 00:16:55.607 nvme0n1: ios=1586/1840, merge=0/0, ticks=16322/17695, in_queue=34017, util=88.77% 00:16:55.607 nvme0n2: ios=3247/3584, merge=0/0, ticks=13071/13653, in_queue=26724, util=89.78% 00:16:55.607 nvme0n3: ios=3072/3525, merge=0/0, ticks=21989/23951, in_queue=45940, util=89.16% 00:16:55.607 nvme0n4: ios=1536/1742, merge=0/0, ticks=16189/17289, in_queue=33478, util=89.10% 00:16:55.607 04:00:30 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:55.607 [global] 00:16:55.607 thread=1 00:16:55.607 invalidate=1 00:16:55.607 rw=randwrite 00:16:55.607 time_based=1 00:16:55.607 runtime=1 00:16:55.607 ioengine=libaio 00:16:55.607 direct=1 00:16:55.607 bs=4096 00:16:55.607 iodepth=128 00:16:55.607 norandommap=0 00:16:55.607 numjobs=1 00:16:55.607 00:16:55.607 verify_dump=1 00:16:55.607 verify_backlog=512 00:16:55.607 verify_state_save=0 00:16:55.607 do_verify=1 00:16:55.607 verify=crc32c-intel 00:16:55.607 [job0] 00:16:55.607 filename=/dev/nvme0n1 00:16:55.607 [job1] 00:16:55.607 filename=/dev/nvme0n2 00:16:55.607 [job2] 00:16:55.607 filename=/dev/nvme0n3 00:16:55.608 [job3] 00:16:55.608 filename=/dev/nvme0n4 00:16:55.608 Could not set queue depth (nvme0n1) 00:16:55.608 Could not set queue depth (nvme0n2) 00:16:55.608 Could not set queue depth (nvme0n3) 00:16:55.608 Could not set queue depth (nvme0n4) 00:16:55.608 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.608 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.608 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.608 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.608 fio-3.35 00:16:55.608 Starting 4 threads 00:16:56.986 00:16:56.986 job0: (groupid=0, jobs=1): err= 0: pid=76824: Fri Nov 8 04:00:31 2024 00:16:56.986 read: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1007msec) 00:16:56.986 slat (usec): min=4, max=12222, avg=173.92, stdev=946.57 00:16:56.986 clat (usec): min=6036, max=44025, avg=21481.45, stdev=7408.79 00:16:56.986 lat (usec): min=6055, max=45590, avg=21655.38, stdev=7508.61 00:16:56.986 clat percentiles (usec): 00:16:56.986 | 1.00th=[10028], 5.00th=[14353], 10.00th=[14746], 20.00th=[15664], 00:16:56.986 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17695], 60.00th=[20579], 00:16:56.986 | 70.00th=[25035], 80.00th=[27919], 90.00th=[33817], 95.00th=[35914], 00:16:56.986 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:16:56.986 | 99.99th=[43779] 00:16:56.986 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:16:56.986 slat (usec): min=4, max=8731, avg=168.47, stdev=766.56 00:16:56.986 clat (usec): min=9002, max=44990, avg=22910.86, stdev=6806.45 00:16:56.986 lat (usec): min=9028, max=45008, avg=23079.33, stdev=6860.82 00:16:56.986 clat percentiles (usec): 00:16:56.986 | 1.00th=[12780], 5.00th=[15926], 10.00th=[16319], 20.00th=[16909], 00:16:56.986 | 30.00th=[17433], 40.00th=[17957], 50.00th=[19530], 60.00th=[24249], 00:16:56.986 | 70.00th=[27395], 80.00th=[30540], 90.00th=[33162], 95.00th=[34866], 00:16:56.986 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40109], 99.95th=[44303], 00:16:56.986 | 99.99th=[44827] 00:16:56.986 bw ( KiB/s): min= 9192, max=14992, per=23.22%, 
avg=12092.00, stdev=4101.22, samples=2 00:16:56.986 iops : min= 2298, max= 3748, avg=3023.00, stdev=1025.30, samples=2 00:16:56.986 lat (msec) : 10=0.44%, 20=53.87%, 50=45.69% 00:16:56.986 cpu : usr=2.98%, sys=8.05%, ctx=646, majf=0, minf=16 00:16:56.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:56.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.986 issued rwts: total=2638,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.986 job1: (groupid=0, jobs=1): err= 0: pid=76825: Fri Nov 8 04:00:31 2024 00:16:56.986 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:16:56.986 slat (usec): min=5, max=14453, avg=187.63, stdev=1175.85 00:16:56.986 clat (usec): min=11377, max=43830, avg=23053.15, stdev=5943.06 00:16:56.986 lat (usec): min=11401, max=44149, avg=23240.78, stdev=6058.30 00:16:56.986 clat percentiles (usec): 00:16:56.986 | 1.00th=[14091], 5.00th=[16450], 10.00th=[16712], 20.00th=[17433], 00:16:56.986 | 30.00th=[17957], 40.00th=[18482], 50.00th=[22152], 60.00th=[25035], 00:16:56.986 | 70.00th=[26608], 80.00th=[28967], 90.00th=[30802], 95.00th=[33162], 00:16:56.986 | 99.00th=[36439], 99.50th=[38536], 99.90th=[42730], 99.95th=[42730], 00:16:56.986 | 99.99th=[43779] 00:16:56.986 write: IOPS=2849, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1007msec); 0 zone resets 00:16:56.986 slat (usec): min=4, max=8875, avg=173.41, stdev=781.10 00:16:56.986 clat (usec): min=5744, max=40808, avg=23716.18, stdev=6470.38 00:16:56.986 lat (usec): min=7409, max=42547, avg=23889.59, stdev=6525.28 00:16:56.986 clat percentiles (usec): 00:16:56.986 | 1.00th=[11469], 5.00th=[15795], 10.00th=[17433], 20.00th=[18482], 00:16:56.986 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20055], 60.00th=[25035], 00:16:56.986 | 70.00th=[28967], 80.00th=[30802], 90.00th=[32637], 95.00th=[34866], 00:16:56.986 | 99.00th=[36963], 99.50th=[38011], 99.90th=[39060], 99.95th=[40109], 00:16:56.986 | 99.99th=[40633] 00:16:56.986 bw ( KiB/s): min= 9160, max=12750, per=21.04%, avg=10955.00, stdev=2538.51, samples=2 00:16:56.986 iops : min= 2290, max= 3187, avg=2738.50, stdev=634.27, samples=2 00:16:56.986 lat (msec) : 10=0.11%, 20=46.45%, 50=53.44% 00:16:56.986 cpu : usr=3.08%, sys=7.46%, ctx=579, majf=0, minf=9 00:16:56.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:56.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.986 issued rwts: total=2560,2869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.986 job2: (groupid=0, jobs=1): err= 0: pid=76826: Fri Nov 8 04:00:31 2024 00:16:56.986 read: IOPS=3375, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec) 00:16:56.986 slat (usec): min=6, max=8930, avg=144.81, stdev=937.41 00:16:56.986 clat (usec): min=540, max=29010, avg=17397.92, stdev=2221.35 00:16:56.986 lat (usec): min=8332, max=29025, avg=17542.73, stdev=2364.87 00:16:56.986 clat percentiles (usec): 00:16:56.986 | 1.00th=[ 8848], 5.00th=[13435], 10.00th=[16057], 20.00th=[16712], 00:16:56.986 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:16:56.986 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19268], 95.00th=[20579], 00:16:56.987 | 99.00th=[25035], 99.50th=[25560], 
99.90th=[26346], 99.95th=[26608], 00:16:56.987 | 99.99th=[28967] 00:16:56.987 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:16:56.987 slat (usec): min=13, max=8493, avg=134.55, stdev=716.18 00:16:56.987 clat (usec): min=9914, max=28057, avg=18850.89, stdev=2138.51 00:16:56.987 lat (usec): min=9941, max=28109, avg=18985.44, stdev=2158.32 00:16:56.987 clat percentiles (usec): 00:16:56.987 | 1.00th=[11469], 5.00th=[14484], 10.00th=[16909], 20.00th=[17957], 00:16:56.987 | 30.00th=[18744], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:16:56.987 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20317], 95.00th=[20841], 00:16:56.987 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27395], 99.95th=[27657], 00:16:56.987 | 99.99th=[28181] 00:16:56.987 bw ( KiB/s): min=14152, max=14520, per=27.53%, avg=14336.00, stdev=260.22, samples=2 00:16:56.987 iops : min= 3538, max= 3630, avg=3584.00, stdev=65.05, samples=2 00:16:56.987 lat (usec) : 750=0.01% 00:16:56.987 lat (msec) : 10=0.98%, 20=86.30%, 50=12.71% 00:16:56.987 cpu : usr=2.59%, sys=11.08%, ctx=314, majf=0, minf=15 00:16:56.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:56.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.987 issued rwts: total=3386,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.987 job3: (groupid=0, jobs=1): err= 0: pid=76827: Fri Nov 8 04:00:31 2024 00:16:56.987 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1002msec) 00:16:56.987 slat (usec): min=7, max=6826, avg=132.94, stdev=721.16 00:16:56.987 clat (usec): min=573, max=27231, avg=17673.26, stdev=2834.36 00:16:56.987 lat (usec): min=5452, max=27273, avg=17806.20, stdev=2845.39 00:16:56.987 clat percentiles (usec): 00:16:56.987 | 1.00th=[ 6194], 5.00th=[13435], 10.00th=[15401], 20.00th=[15926], 00:16:56.987 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:16:56.987 | 70.00th=[18744], 80.00th=[20317], 90.00th=[21103], 95.00th=[21890], 00:16:56.987 | 99.00th=[24511], 99.50th=[25822], 99.90th=[27132], 99.95th=[27132], 00:16:56.987 | 99.99th=[27132] 00:16:56.987 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:16:56.987 slat (usec): min=12, max=7447, avg=146.20, stdev=807.88 00:16:56.987 clat (usec): min=11801, max=24585, avg=18541.01, stdev=3147.04 00:16:56.987 lat (usec): min=11833, max=26940, avg=18687.21, stdev=3114.22 00:16:56.987 clat percentiles (usec): 00:16:56.987 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13698], 20.00th=[16450], 00:16:56.987 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:16:56.987 | 70.00th=[19792], 80.00th=[22152], 90.00th=[22938], 95.00th=[23200], 00:16:56.987 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:16:56.987 | 99.99th=[24511] 00:16:56.987 bw ( KiB/s): min=12288, max=16384, per=27.53%, avg=14336.00, stdev=2896.31, samples=2 00:16:56.987 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:56.987 lat (usec) : 750=0.01% 00:16:56.987 lat (msec) : 10=0.60%, 20=72.90%, 50=26.48% 00:16:56.987 cpu : usr=3.40%, sys=9.89%, ctx=341, majf=0, minf=11 00:16:56.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:56.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.987 issued rwts: total=3379,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.987 00:16:56.987 Run status group 0 (all jobs): 00:16:56.987 READ: bw=46.4MiB/s (48.7MB/s), 9.93MiB/s-13.2MiB/s (10.4MB/s-13.8MB/s), io=46.7MiB (49.0MB), run=1002-1007msec 00:16:56.987 WRITE: bw=50.9MiB/s (53.3MB/s), 11.1MiB/s-14.0MiB/s (11.7MB/s-14.7MB/s), io=51.2MiB (53.7MB), run=1002-1007msec 00:16:56.987 00:16:56.987 Disk stats (read/write): 00:16:56.987 nvme0n1: ios=2437/2560, merge=0/0, ticks=24691/25176, in_queue=49867, util=88.08% 00:16:56.987 nvme0n2: ios=2223/2560, merge=0/0, ticks=22419/27628, in_queue=50047, util=88.66% 00:16:56.987 nvme0n3: ios=2816/3072, merge=0/0, ticks=23131/25738, in_queue=48869, util=88.95% 00:16:56.987 nvme0n4: ios=2734/3072, merge=0/0, ticks=15591/17154, in_queue=32745, util=89.60% 00:16:56.987 04:00:31 -- target/fio.sh@55 -- # sync 00:16:56.987 04:00:31 -- target/fio.sh@59 -- # fio_pid=76841 00:16:56.987 04:00:31 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:56.987 04:00:31 -- target/fio.sh@61 -- # sleep 3 00:16:56.987 [global] 00:16:56.987 thread=1 00:16:56.987 invalidate=1 00:16:56.987 rw=read 00:16:56.987 time_based=1 00:16:56.987 runtime=10 00:16:56.987 ioengine=libaio 00:16:56.987 direct=1 00:16:56.987 bs=4096 00:16:56.987 iodepth=1 00:16:56.987 norandommap=1 00:16:56.987 numjobs=1 00:16:56.987 00:16:56.987 [job0] 00:16:56.987 filename=/dev/nvme0n1 00:16:56.987 [job1] 00:16:56.987 filename=/dev/nvme0n2 00:16:56.987 [job2] 00:16:56.987 filename=/dev/nvme0n3 00:16:56.987 [job3] 00:16:56.987 filename=/dev/nvme0n4 00:16:56.987 Could not set queue depth (nvme0n1) 00:16:56.987 Could not set queue depth (nvme0n2) 00:16:56.987 Could not set queue depth (nvme0n3) 00:16:56.987 Could not set queue depth (nvme0n4) 00:16:56.987 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.987 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.987 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.987 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.987 fio-3.35 00:16:56.987 Starting 4 threads 00:17:00.304 04:00:34 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:00.304 fio: pid=76884, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.304 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=53170176, buflen=4096 00:17:00.304 04:00:35 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:00.304 fio: pid=76883, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.304 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36413440, buflen=4096 00:17:00.304 04:00:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.304 04:00:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:00.563 fio: pid=76881, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.563 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62648320, 
buflen=4096 00:17:00.563 04:00:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.563 04:00:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:00.822 fio: pid=76882, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.822 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=47611904, buflen=4096 00:17:00.822 00:17:00.822 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76881: Fri Nov 8 04:00:35 2024 00:17:00.822 read: IOPS=4533, BW=17.7MiB/s (18.6MB/s)(59.7MiB/3374msec) 00:17:00.822 slat (usec): min=12, max=15867, avg=17.67, stdev=189.31 00:17:00.822 clat (usec): min=130, max=2485, avg=201.63, stdev=42.59 00:17:00.822 lat (usec): min=145, max=16207, avg=219.30, stdev=195.32 00:17:00.822 clat percentiles (usec): 00:17:00.822 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:17:00.822 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 208], 00:17:00.822 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 255], 00:17:00.822 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 412], 99.95th=[ 611], 00:17:00.822 | 99.99th=[ 2073] 00:17:00.822 bw ( KiB/s): min=17040, max=18432, per=33.55%, avg=17884.00, stdev=483.42, samples=6 00:17:00.822 iops : min= 4260, max= 4608, avg=4471.00, stdev=120.86, samples=6 00:17:00.822 lat (usec) : 250=93.40%, 500=6.52%, 750=0.05%, 1000=0.01% 00:17:00.822 lat (msec) : 2=0.01%, 4=0.01% 00:17:00.822 cpu : usr=1.07%, sys=5.19%, ctx=15304, majf=0, minf=1 00:17:00.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 issued rwts: total=15296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.822 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76882: Fri Nov 8 04:00:35 2024 00:17:00.822 read: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(45.4MiB/3661msec) 00:17:00.822 slat (usec): min=7, max=14406, avg=16.99, stdev=209.26 00:17:00.822 clat (usec): min=114, max=3267, avg=296.72, stdev=115.93 00:17:00.822 lat (usec): min=125, max=14590, avg=313.71, stdev=238.60 00:17:00.822 clat percentiles (usec): 00:17:00.822 | 1.00th=[ 126], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 167], 00:17:00.822 | 30.00th=[ 233], 40.00th=[ 297], 50.00th=[ 322], 60.00th=[ 338], 00:17:00.822 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 441], 00:17:00.822 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 1205], 99.95th=[ 1532], 00:17:00.822 | 99.99th=[ 2278] 00:17:00.822 bw ( KiB/s): min=10664, max=19476, per=23.20%, avg=12366.29, stdev=3151.19, samples=7 00:17:00.822 iops : min= 2666, max= 4869, avg=3091.57, stdev=787.80, samples=7 00:17:00.822 lat (usec) : 250=32.83%, 500=65.51%, 750=1.40%, 1000=0.06% 00:17:00.822 lat (msec) : 2=0.16%, 4=0.02% 00:17:00.822 cpu : usr=0.87%, sys=3.44%, ctx=11635, majf=0, minf=1 00:17:00.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 issued rwts: total=11625,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:00.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.822 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76883: Fri Nov 8 04:00:35 2024 00:17:00.822 read: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(34.7MiB/3153msec) 00:17:00.822 slat (usec): min=7, max=8751, avg=14.22, stdev=119.68 00:17:00.822 clat (usec): min=142, max=2271, avg=338.99, stdev=82.87 00:17:00.822 lat (usec): min=153, max=9012, avg=353.21, stdev=144.38 00:17:00.822 clat percentiles (usec): 00:17:00.822 | 1.00th=[ 174], 5.00th=[ 210], 10.00th=[ 243], 20.00th=[ 285], 00:17:00.822 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:17:00.822 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 449], 00:17:00.822 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 1188], 99.95th=[ 1532], 00:17:00.822 | 99.99th=[ 2278] 00:17:00.822 bw ( KiB/s): min=10664, max=11568, per=20.94%, avg=11160.00, stdev=334.32, samples=6 00:17:00.822 iops : min= 2666, max= 2892, avg=2790.00, stdev=83.58, samples=6 00:17:00.822 lat (usec) : 250=11.88%, 500=86.18%, 750=1.75%, 1000=0.06% 00:17:00.822 lat (msec) : 2=0.10%, 4=0.02% 00:17:00.822 cpu : usr=0.92%, sys=2.98%, ctx=8899, majf=0, minf=2 00:17:00.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 issued rwts: total=8891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.822 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76884: Fri Nov 8 04:00:35 2024 00:17:00.822 read: IOPS=4445, BW=17.4MiB/s (18.2MB/s)(50.7MiB/2920msec) 00:17:00.822 slat (nsec): min=13205, max=85629, avg=15785.96, stdev=4732.12 00:17:00.822 clat (usec): min=142, max=1606, avg=207.79, stdev=25.82 00:17:00.822 lat (usec): min=157, max=1620, avg=223.57, stdev=26.50 00:17:00.822 clat percentiles (usec): 00:17:00.822 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:17:00.822 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:17:00.822 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 247], 00:17:00.822 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 429], 00:17:00.822 | 99.99th=[ 701] 00:17:00.822 bw ( KiB/s): min=16992, max=18208, per=33.31%, avg=17756.80, stdev=476.27, samples=5 00:17:00.822 iops : min= 4248, max= 4552, avg=4439.20, stdev=119.07, samples=5 00:17:00.822 lat (usec) : 250=95.98%, 500=3.97%, 750=0.03% 00:17:00.822 lat (msec) : 2=0.01% 00:17:00.822 cpu : usr=1.30%, sys=5.28%, ctx=12982, majf=0, minf=2 00:17:00.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.822 issued rwts: total=12982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.822 00:17:00.822 Run status group 0 (all jobs): 00:17:00.822 READ: bw=52.1MiB/s (54.6MB/s), 11.0MiB/s-17.7MiB/s (11.5MB/s-18.6MB/s), io=191MiB (200MB), run=2920-3661msec 00:17:00.822 00:17:00.822 Disk stats (read/write): 00:17:00.822 nvme0n1: ios=15272/0, merge=0/0, ticks=3125/0, in_queue=3125, util=95.08% 00:17:00.822 
nvme0n2: ios=11361/0, merge=0/0, ticks=3392/0, in_queue=3392, util=95.45% 00:17:00.822 nvme0n3: ios=8764/0, merge=0/0, ticks=2965/0, in_queue=2965, util=96.39% 00:17:00.822 nvme0n4: ios=12745/0, merge=0/0, ticks=2722/0, in_queue=2722, util=96.79% 00:17:00.822 04:00:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.822 04:00:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:01.390 04:00:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.390 04:00:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:01.649 04:00:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.649 04:00:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:01.908 04:00:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.908 04:00:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:02.166 04:00:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.166 04:00:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:02.425 04:00:37 -- target/fio.sh@69 -- # fio_status=0 00:17:02.425 04:00:37 -- target/fio.sh@70 -- # wait 76841 00:17:02.425 04:00:37 -- target/fio.sh@70 -- # fio_status=4 00:17:02.425 04:00:37 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.425 04:00:37 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.425 04:00:37 -- common/autotest_common.sh@1208 -- # local i=0 00:17:02.425 04:00:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:02.425 04:00:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.425 04:00:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.425 04:00:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:02.425 nvmf hotplug test: fio failed as expected 00:17:02.425 04:00:37 -- common/autotest_common.sh@1220 -- # return 0 00:17:02.425 04:00:37 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:02.425 04:00:37 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:02.425 04:00:37 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.685 04:00:37 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:02.685 04:00:37 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:02.685 04:00:37 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:02.685 04:00:37 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:02.685 04:00:37 -- target/fio.sh@91 -- # nvmftestfini 00:17:02.685 04:00:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:02.685 04:00:37 -- nvmf/common.sh@116 -- # sync 00:17:02.685 04:00:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:02.685 04:00:37 -- nvmf/common.sh@119 -- # set +e 00:17:02.685 04:00:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:02.685 04:00:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:02.685 rmmod nvme_tcp 00:17:02.685 rmmod nvme_fabrics 00:17:02.685 
rmmod nvme_keyring 00:17:02.685 04:00:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:02.944 04:00:37 -- nvmf/common.sh@123 -- # set -e 00:17:02.944 04:00:37 -- nvmf/common.sh@124 -- # return 0 00:17:02.944 04:00:37 -- nvmf/common.sh@477 -- # '[' -n 76347 ']' 00:17:02.944 04:00:37 -- nvmf/common.sh@478 -- # killprocess 76347 00:17:02.944 04:00:37 -- common/autotest_common.sh@936 -- # '[' -z 76347 ']' 00:17:02.944 04:00:37 -- common/autotest_common.sh@940 -- # kill -0 76347 00:17:02.944 04:00:37 -- common/autotest_common.sh@941 -- # uname 00:17:02.944 04:00:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.944 04:00:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76347 00:17:02.944 killing process with pid 76347 00:17:02.944 04:00:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.944 04:00:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.944 04:00:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76347' 00:17:02.944 04:00:37 -- common/autotest_common.sh@955 -- # kill 76347 00:17:02.944 04:00:37 -- common/autotest_common.sh@960 -- # wait 76347 00:17:03.203 04:00:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:03.203 04:00:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:03.203 04:00:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:03.203 04:00:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.203 04:00:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:03.203 04:00:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.203 04:00:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.203 04:00:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.203 04:00:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:03.203 ************************************ 00:17:03.203 END TEST nvmf_fio_target 00:17:03.203 ************************************ 00:17:03.203 00:17:03.203 real 0m19.790s 00:17:03.203 user 1m16.160s 00:17:03.204 sys 0m7.727s 00:17:03.204 04:00:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:03.204 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.204 04:00:38 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.204 04:00:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.204 04:00:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.204 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.204 ************************************ 00:17:03.204 START TEST nvmf_bdevio 00:17:03.204 ************************************ 00:17:03.204 04:00:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.204 * Looking for test storage... 
00:17:03.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.463 04:00:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.463 04:00:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.463 04:00:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:03.463 04:00:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:03.463 04:00:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:03.463 04:00:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:03.463 04:00:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:03.463 04:00:38 -- scripts/common.sh@335 -- # IFS=.-: 00:17:03.463 04:00:38 -- scripts/common.sh@335 -- # read -ra ver1 00:17:03.463 04:00:38 -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.463 04:00:38 -- scripts/common.sh@336 -- # read -ra ver2 00:17:03.463 04:00:38 -- scripts/common.sh@337 -- # local 'op=<' 00:17:03.463 04:00:38 -- scripts/common.sh@339 -- # ver1_l=2 00:17:03.463 04:00:38 -- scripts/common.sh@340 -- # ver2_l=1 00:17:03.463 04:00:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:03.463 04:00:38 -- scripts/common.sh@343 -- # case "$op" in 00:17:03.463 04:00:38 -- scripts/common.sh@344 -- # : 1 00:17:03.463 04:00:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:03.463 04:00:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.463 04:00:38 -- scripts/common.sh@364 -- # decimal 1 00:17:03.463 04:00:38 -- scripts/common.sh@352 -- # local d=1 00:17:03.463 04:00:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.463 04:00:38 -- scripts/common.sh@354 -- # echo 1 00:17:03.463 04:00:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:03.463 04:00:38 -- scripts/common.sh@365 -- # decimal 2 00:17:03.463 04:00:38 -- scripts/common.sh@352 -- # local d=2 00:17:03.463 04:00:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.463 04:00:38 -- scripts/common.sh@354 -- # echo 2 00:17:03.463 04:00:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:03.463 04:00:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:03.463 04:00:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:03.463 04:00:38 -- scripts/common.sh@367 -- # return 0 00:17:03.463 04:00:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.463 04:00:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.463 --rc genhtml_legend=1 00:17:03.463 --rc geninfo_all_blocks=1 00:17:03.463 --rc geninfo_unexecuted_blocks=1 00:17:03.463 00:17:03.463 ' 00:17:03.463 04:00:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.463 --rc genhtml_legend=1 00:17:03.463 --rc geninfo_all_blocks=1 00:17:03.463 --rc geninfo_unexecuted_blocks=1 00:17:03.463 00:17:03.463 ' 00:17:03.463 04:00:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:03.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.464 --rc genhtml_branch_coverage=1 00:17:03.464 --rc genhtml_function_coverage=1 00:17:03.464 --rc genhtml_legend=1 00:17:03.464 --rc geninfo_all_blocks=1 00:17:03.464 --rc geninfo_unexecuted_blocks=1 00:17:03.464 00:17:03.464 ' 00:17:03.464 
04:00:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:03.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.464 --rc genhtml_branch_coverage=1 00:17:03.464 --rc genhtml_function_coverage=1 00:17:03.464 --rc genhtml_legend=1 00:17:03.464 --rc geninfo_all_blocks=1 00:17:03.464 --rc geninfo_unexecuted_blocks=1 00:17:03.464 00:17:03.464 ' 00:17:03.464 04:00:38 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.464 04:00:38 -- nvmf/common.sh@7 -- # uname -s 00:17:03.464 04:00:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.464 04:00:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.464 04:00:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.464 04:00:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.464 04:00:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.464 04:00:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.464 04:00:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.464 04:00:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.464 04:00:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.464 04:00:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:03.464 04:00:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:03.464 04:00:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.464 04:00:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.464 04:00:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.464 04:00:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.464 04:00:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.464 04:00:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.464 04:00:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.464 04:00:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.464 04:00:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.464 04:00:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.464 04:00:38 -- paths/export.sh@5 -- # export PATH 00:17:03.464 04:00:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.464 04:00:38 -- nvmf/common.sh@46 -- # : 0 00:17:03.464 04:00:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:03.464 04:00:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:03.464 04:00:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:03.464 04:00:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.464 04:00:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.464 04:00:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:03.464 04:00:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:03.464 04:00:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:03.464 04:00:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.464 04:00:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.464 04:00:38 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:03.464 04:00:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:03.464 04:00:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.464 04:00:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:03.464 04:00:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:03.464 04:00:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:03.464 04:00:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.464 04:00:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.464 04:00:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.464 04:00:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:03.464 04:00:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:03.464 04:00:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.464 04:00:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.464 04:00:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.464 04:00:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:03.464 04:00:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.464 04:00:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.464 04:00:38 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.464 04:00:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.464 04:00:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.464 04:00:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.464 04:00:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.464 04:00:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.464 04:00:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:03.464 04:00:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:03.464 Cannot find device "nvmf_tgt_br" 00:17:03.464 04:00:38 -- nvmf/common.sh@154 -- # true 00:17:03.464 04:00:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.464 Cannot find device "nvmf_tgt_br2" 00:17:03.464 04:00:38 -- nvmf/common.sh@155 -- # true 00:17:03.464 04:00:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:03.464 04:00:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:03.464 Cannot find device "nvmf_tgt_br" 00:17:03.464 04:00:38 -- nvmf/common.sh@157 -- # true 00:17:03.464 04:00:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:03.464 Cannot find device "nvmf_tgt_br2" 00:17:03.464 04:00:38 -- nvmf/common.sh@158 -- # true 00:17:03.464 04:00:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:03.464 04:00:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:03.723 04:00:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.723 04:00:38 -- nvmf/common.sh@161 -- # true 00:17:03.723 04:00:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.723 04:00:38 -- nvmf/common.sh@162 -- # true 00:17:03.723 04:00:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.723 04:00:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.723 04:00:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.723 04:00:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.723 04:00:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.723 04:00:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.724 04:00:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.724 04:00:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.724 04:00:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.724 04:00:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:03.724 04:00:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:03.724 04:00:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:03.724 04:00:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:03.724 04:00:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.724 04:00:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.724 04:00:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:03.724 04:00:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:03.724 04:00:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:03.724 04:00:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.724 04:00:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.724 04:00:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.724 04:00:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.724 04:00:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.724 04:00:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:03.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:03.724 00:17:03.724 --- 10.0.0.2 ping statistics --- 00:17:03.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.724 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:03.724 04:00:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:03.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:03.724 00:17:03.724 --- 10.0.0.3 ping statistics --- 00:17:03.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.724 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:03.724 04:00:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:17:03.724 00:17:03.724 --- 10.0.0.1 ping statistics --- 00:17:03.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.724 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:03.724 04:00:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.724 04:00:38 -- nvmf/common.sh@421 -- # return 0 00:17:03.724 04:00:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.724 04:00:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.724 04:00:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:03.724 04:00:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:03.724 04:00:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.724 04:00:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:03.724 04:00:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:03.724 04:00:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.724 04:00:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.724 04:00:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.724 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.724 04:00:38 -- nvmf/common.sh@469 -- # nvmfpid=77221 00:17:03.724 04:00:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:03.724 04:00:38 -- nvmf/common.sh@470 -- # waitforlisten 77221 00:17:03.724 04:00:38 -- common/autotest_common.sh@829 -- # '[' -z 77221 ']' 00:17:03.724 04:00:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.724 04:00:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.724 04:00:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
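Aside, a condensed sketch rather than captured output: the nvmf_veth_init trace above builds a small virtual fabric for this test, one veth pair for the initiator on the host side and two for the target namespace, all bridged together, with TCP port 4420 opened through iptables and connectivity verified by the three pings. The same topology, reduced to the commands already traced above:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target end
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target end
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # ties the *_br peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
(The "ip link set ... up" steps are elided here; the trace above brings every interface and the bridge up before pinging.)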
00:17:03.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.724 04:00:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.724 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.983 [2024-11-08 04:00:38.835221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:03.983 [2024-11-08 04:00:38.835283] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.983 [2024-11-08 04:00:38.971330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.983 [2024-11-08 04:00:39.064621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.983 [2024-11-08 04:00:39.064800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.983 [2024-11-08 04:00:39.064817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.983 [2024-11-08 04:00:39.064829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.983 [2024-11-08 04:00:39.064996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.983 [2024-11-08 04:00:39.065573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.983 [2024-11-08 04:00:39.065727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:03.983 [2024-11-08 04:00:39.065734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.919 04:00:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.919 04:00:39 -- common/autotest_common.sh@862 -- # return 0 00:17:04.919 04:00:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:04.919 04:00:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.919 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.919 04:00:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.919 04:00:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.919 04:00:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.919 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.919 [2024-11-08 04:00:39.866145] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.919 04:00:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.919 04:00:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.919 04:00:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.919 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.919 Malloc0 00:17:04.920 04:00:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.920 04:00:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.920 04:00:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.920 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.920 04:00:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.920 04:00:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.920 04:00:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.920 04:00:39 -- common/autotest_common.sh@10 -- # set +x 
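Aside, a condensed sketch rather than captured output: with nvmf_tgt now running inside the namespace, the bdevio setup traced next reduces to the app launch plus five RPC calls, using the sizes set at the top of bdevio.sh (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512); every command below is taken from the trace that follows:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags exactly as traced below
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
(The serial number flag in the actual trace below is -s SPDK00000000000001; the line above keeps that call shape, and the listener address matches the 10.0.0.2 target interface configured earlier.)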
00:17:04.920 04:00:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.920 04:00:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.920 04:00:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.920 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.920 [2024-11-08 04:00:39.941544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.920 04:00:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.920 04:00:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:04.920 04:00:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:04.920 04:00:39 -- nvmf/common.sh@520 -- # config=() 00:17:04.920 04:00:39 -- nvmf/common.sh@520 -- # local subsystem config 00:17:04.920 04:00:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:04.920 04:00:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:04.920 { 00:17:04.920 "params": { 00:17:04.920 "name": "Nvme$subsystem", 00:17:04.920 "trtype": "$TEST_TRANSPORT", 00:17:04.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.920 "adrfam": "ipv4", 00:17:04.920 "trsvcid": "$NVMF_PORT", 00:17:04.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.920 "hdgst": ${hdgst:-false}, 00:17:04.920 "ddgst": ${ddgst:-false} 00:17:04.920 }, 00:17:04.920 "method": "bdev_nvme_attach_controller" 00:17:04.920 } 00:17:04.920 EOF 00:17:04.920 )") 00:17:04.920 04:00:39 -- nvmf/common.sh@542 -- # cat 00:17:04.920 04:00:39 -- nvmf/common.sh@544 -- # jq . 00:17:04.920 04:00:39 -- nvmf/common.sh@545 -- # IFS=, 00:17:04.920 04:00:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:04.920 "params": { 00:17:04.920 "name": "Nvme1", 00:17:04.920 "trtype": "tcp", 00:17:04.920 "traddr": "10.0.0.2", 00:17:04.920 "adrfam": "ipv4", 00:17:04.920 "trsvcid": "4420", 00:17:04.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.920 "hdgst": false, 00:17:04.920 "ddgst": false 00:17:04.920 }, 00:17:04.920 "method": "bdev_nvme_attach_controller" 00:17:04.920 }' 00:17:04.920 [2024-11-08 04:00:40.006860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.920 [2024-11-08 04:00:40.006960] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77275 ] 00:17:05.179 [2024-11-08 04:00:40.150629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:05.179 [2024-11-08 04:00:40.261300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.179 [2024-11-08 04:00:40.261514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.179 [2024-11-08 04:00:40.261797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.438 [2024-11-08 04:00:40.467502] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
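Aside, illustrative rather than captured output: the JSON assembled above simply tells bdevio to run bdev_nvme_attach_controller against the listener created a moment earlier. From an ordinary initiator, the same subsystem could be reached with nvme-cli, roughly as follows (the hostnqn is the one gen-hostnqn produced earlier in this log; the disconnect form matches the one used by the fio test above):
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01
  # ...exercise the resulting namespace block device...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1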
00:17:05.438 [2024-11-08 04:00:40.467567] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:05.438 I/O targets: 00:17:05.438 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:05.438 00:17:05.438 00:17:05.438 CUnit - A unit testing framework for C - Version 2.1-3 00:17:05.438 http://cunit.sourceforge.net/ 00:17:05.438 00:17:05.438 00:17:05.438 Suite: bdevio tests on: Nvme1n1 00:17:05.438 Test: blockdev write read block ...passed 00:17:05.697 Test: blockdev write zeroes read block ...passed 00:17:05.697 Test: blockdev write zeroes read no split ...passed 00:17:05.697 Test: blockdev write zeroes read split ...passed 00:17:05.697 Test: blockdev write zeroes read split partial ...passed 00:17:05.697 Test: blockdev reset ...[2024-11-08 04:00:40.585987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:05.697 [2024-11-08 04:00:40.586083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1144910 (9): Bad file descriptor 00:17:05.697 [2024-11-08 04:00:40.598139] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:05.697 passed 00:17:05.698 Test: blockdev write read 8 blocks ...passed 00:17:05.698 Test: blockdev write read size > 128k ...passed 00:17:05.698 Test: blockdev write read invalid size ...passed 00:17:05.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:05.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:05.698 Test: blockdev write read max offset ...passed 00:17:05.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:05.698 Test: blockdev writev readv 8 blocks ...passed 00:17:05.698 Test: blockdev writev readv 30 x 1block ...passed 00:17:05.698 Test: blockdev writev readv block ...passed 00:17:05.698 Test: blockdev writev readv size > 128k ...passed 00:17:05.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:05.698 Test: blockdev comparev and writev ...[2024-11-08 04:00:40.772166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.772218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.772237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.772247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.772827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.772914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.772930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.772940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.773348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.773375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.773391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.773400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.774047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.774104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.698 [2024-11-08 04:00:40.774120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.698 [2024-11-08 04:00:40.774130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.957 passed 00:17:05.957 Test: blockdev nvme passthru rw ...passed 00:17:05.957 Test: blockdev nvme passthru vendor specific ...[2024-11-08 04:00:40.857783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.957 [2024-11-08 04:00:40.857808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.957 [2024-11-08 04:00:40.857955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.957 [2024-11-08 04:00:40.857975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.957 [2024-11-08 04:00:40.858088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.957 [2024-11-08 04:00:40.858107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:05.957 [2024-11-08 04:00:40.858211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.957 [2024-11-08 04:00:40.858230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.957 passed 00:17:05.957 Test: blockdev nvme admin passthru ...passed 00:17:05.957 Test: blockdev copy ...passed 00:17:05.957 00:17:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.957 suites 1 1 n/a 0 0 00:17:05.957 tests 23 23 23 0 0 00:17:05.957 asserts 152 152 152 0 n/a 00:17:05.957 00:17:05.957 Elapsed time = 0.900 seconds 00:17:06.216 04:00:41 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.216 04:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.216 04:00:41 -- common/autotest_common.sh@10 -- # set +x 00:17:06.216 04:00:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.216 04:00:41 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:06.216 04:00:41 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:06.216 04:00:41 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:06.216 04:00:41 -- nvmf/common.sh@116 -- # sync 00:17:06.216 04:00:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:06.216 04:00:41 -- nvmf/common.sh@119 -- # set +e 00:17:06.216 04:00:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.216 04:00:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:06.216 rmmod nvme_tcp 00:17:06.216 rmmod nvme_fabrics 00:17:06.216 rmmod nvme_keyring 00:17:06.216 04:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.216 04:00:41 -- nvmf/common.sh@123 -- # set -e 00:17:06.216 04:00:41 -- nvmf/common.sh@124 -- # return 0 00:17:06.216 04:00:41 -- nvmf/common.sh@477 -- # '[' -n 77221 ']' 00:17:06.216 04:00:41 -- nvmf/common.sh@478 -- # killprocess 77221 00:17:06.216 04:00:41 -- common/autotest_common.sh@936 -- # '[' -z 77221 ']' 00:17:06.216 04:00:41 -- common/autotest_common.sh@940 -- # kill -0 77221 00:17:06.216 04:00:41 -- common/autotest_common.sh@941 -- # uname 00:17:06.216 04:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.216 04:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77221 00:17:06.475 killing process with pid 77221 00:17:06.475 04:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:06.475 04:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:06.475 04:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77221' 00:17:06.475 04:00:41 -- common/autotest_common.sh@955 -- # kill 77221 00:17:06.475 04:00:41 -- common/autotest_common.sh@960 -- # wait 77221 00:17:06.734 04:00:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.734 04:00:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:06.734 04:00:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:06.734 04:00:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.734 04:00:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:06.734 04:00:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.734 04:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.734 04:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.734 04:00:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:06.734 00:17:06.734 real 0m3.491s 00:17:06.734 user 0m12.517s 00:17:06.734 sys 0m0.907s 00:17:06.734 04:00:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.734 04:00:41 -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 ************************************ 00:17:06.734 END TEST nvmf_bdevio 00:17:06.734 ************************************ 00:17:06.734 04:00:41 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:06.734 04:00:41 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.734 04:00:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:06.734 04:00:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.734 04:00:41 -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 ************************************ 00:17:06.734 START TEST nvmf_bdevio_no_huge 00:17:06.734 ************************************ 00:17:06.734 04:00:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.994 * Looking for test storage... 
00:17:06.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:06.994 04:00:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:06.994 04:00:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:06.994 04:00:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:06.994 04:00:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:06.994 04:00:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:06.994 04:00:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:06.994 04:00:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:06.994 04:00:41 -- scripts/common.sh@335 -- # IFS=.-: 00:17:06.994 04:00:41 -- scripts/common.sh@335 -- # read -ra ver1 00:17:06.994 04:00:41 -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.994 04:00:41 -- scripts/common.sh@336 -- # read -ra ver2 00:17:06.994 04:00:41 -- scripts/common.sh@337 -- # local 'op=<' 00:17:06.994 04:00:41 -- scripts/common.sh@339 -- # ver1_l=2 00:17:06.994 04:00:41 -- scripts/common.sh@340 -- # ver2_l=1 00:17:06.994 04:00:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:06.994 04:00:41 -- scripts/common.sh@343 -- # case "$op" in 00:17:06.994 04:00:41 -- scripts/common.sh@344 -- # : 1 00:17:06.994 04:00:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:06.994 04:00:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.994 04:00:41 -- scripts/common.sh@364 -- # decimal 1 00:17:06.994 04:00:41 -- scripts/common.sh@352 -- # local d=1 00:17:06.994 04:00:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.994 04:00:41 -- scripts/common.sh@354 -- # echo 1 00:17:06.994 04:00:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:06.994 04:00:41 -- scripts/common.sh@365 -- # decimal 2 00:17:06.994 04:00:41 -- scripts/common.sh@352 -- # local d=2 00:17:06.994 04:00:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.994 04:00:41 -- scripts/common.sh@354 -- # echo 2 00:17:06.994 04:00:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:06.994 04:00:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:06.994 04:00:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:06.994 04:00:41 -- scripts/common.sh@367 -- # return 0 00:17:06.994 04:00:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.994 04:00:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.994 --rc genhtml_branch_coverage=1 00:17:06.994 --rc genhtml_function_coverage=1 00:17:06.994 --rc genhtml_legend=1 00:17:06.994 --rc geninfo_all_blocks=1 00:17:06.994 --rc geninfo_unexecuted_blocks=1 00:17:06.994 00:17:06.994 ' 00:17:06.994 04:00:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.994 --rc genhtml_branch_coverage=1 00:17:06.994 --rc genhtml_function_coverage=1 00:17:06.994 --rc genhtml_legend=1 00:17:06.994 --rc geninfo_all_blocks=1 00:17:06.994 --rc geninfo_unexecuted_blocks=1 00:17:06.994 00:17:06.994 ' 00:17:06.994 04:00:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.994 --rc genhtml_branch_coverage=1 00:17:06.994 --rc genhtml_function_coverage=1 00:17:06.994 --rc genhtml_legend=1 00:17:06.994 --rc geninfo_all_blocks=1 00:17:06.994 --rc geninfo_unexecuted_blocks=1 00:17:06.994 00:17:06.994 ' 00:17:06.994 
04:00:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.994 --rc genhtml_branch_coverage=1 00:17:06.994 --rc genhtml_function_coverage=1 00:17:06.994 --rc genhtml_legend=1 00:17:06.994 --rc geninfo_all_blocks=1 00:17:06.994 --rc geninfo_unexecuted_blocks=1 00:17:06.994 00:17:06.994 ' 00:17:06.994 04:00:41 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.994 04:00:41 -- nvmf/common.sh@7 -- # uname -s 00:17:06.994 04:00:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.994 04:00:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.994 04:00:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.994 04:00:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.994 04:00:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.994 04:00:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.994 04:00:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.994 04:00:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.994 04:00:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.994 04:00:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.994 04:00:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:06.994 04:00:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:06.994 04:00:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.994 04:00:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.994 04:00:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.994 04:00:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.995 04:00:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.995 04:00:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.995 04:00:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.995 04:00:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.995 04:00:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.995 04:00:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.995 04:00:41 -- paths/export.sh@5 -- # export PATH 00:17:06.995 04:00:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.995 04:00:41 -- nvmf/common.sh@46 -- # : 0 00:17:06.995 04:00:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:06.995 04:00:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:06.995 04:00:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:06.995 04:00:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.995 04:00:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.995 04:00:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:06.995 04:00:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:06.995 04:00:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:06.995 04:00:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.995 04:00:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.995 04:00:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:06.995 04:00:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:06.995 04:00:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.995 04:00:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:06.995 04:00:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:06.995 04:00:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:06.995 04:00:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.995 04:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.995 04:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.995 04:00:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:06.995 04:00:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:06.995 04:00:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:06.995 04:00:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:06.995 04:00:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:06.995 04:00:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:06.995 04:00:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.995 04:00:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.995 04:00:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:06.995 04:00:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:06.995 04:00:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.995 04:00:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.995 04:00:41 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.995 04:00:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.995 04:00:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.995 04:00:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.995 04:00:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.995 04:00:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.995 04:00:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:06.995 04:00:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:06.995 Cannot find device "nvmf_tgt_br" 00:17:06.995 04:00:42 -- nvmf/common.sh@154 -- # true 00:17:06.995 04:00:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.995 Cannot find device "nvmf_tgt_br2" 00:17:06.995 04:00:42 -- nvmf/common.sh@155 -- # true 00:17:06.995 04:00:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:06.995 04:00:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:06.995 Cannot find device "nvmf_tgt_br" 00:17:06.995 04:00:42 -- nvmf/common.sh@157 -- # true 00:17:06.995 04:00:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:06.995 Cannot find device "nvmf_tgt_br2" 00:17:06.995 04:00:42 -- nvmf/common.sh@158 -- # true 00:17:06.995 04:00:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:06.995 04:00:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:07.254 04:00:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.254 04:00:42 -- nvmf/common.sh@161 -- # true 00:17:07.254 04:00:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.254 04:00:42 -- nvmf/common.sh@162 -- # true 00:17:07.254 04:00:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.254 04:00:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.254 04:00:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.254 04:00:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.254 04:00:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.254 04:00:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.254 04:00:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.254 04:00:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.254 04:00:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.254 04:00:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:07.254 04:00:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:07.254 04:00:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:07.254 04:00:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:07.254 04:00:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.254 04:00:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.254 04:00:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:07.254 04:00:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:07.254 04:00:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:07.254 04:00:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.254 04:00:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.254 04:00:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.254 04:00:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.254 04:00:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.254 04:00:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:07.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:07.254 00:17:07.255 --- 10.0.0.2 ping statistics --- 00:17:07.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.255 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:07.255 04:00:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:07.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:07.255 00:17:07.255 --- 10.0.0.3 ping statistics --- 00:17:07.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.255 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:07.255 04:00:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:07.255 00:17:07.255 --- 10.0.0.1 ping statistics --- 00:17:07.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.255 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:07.255 04:00:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.255 04:00:42 -- nvmf/common.sh@421 -- # return 0 00:17:07.255 04:00:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:07.255 04:00:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.255 04:00:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:07.255 04:00:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:07.255 04:00:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.255 04:00:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:07.255 04:00:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:07.255 04:00:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:07.255 04:00:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.255 04:00:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.255 04:00:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.255 04:00:42 -- nvmf/common.sh@469 -- # nvmfpid=77470 00:17:07.255 04:00:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:07.255 04:00:42 -- nvmf/common.sh@470 -- # waitforlisten 77470 00:17:07.255 04:00:42 -- common/autotest_common.sh@829 -- # '[' -z 77470 ']' 00:17:07.255 04:00:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.255 04:00:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.255 04:00:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:07.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.255 04:00:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.255 04:00:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.513 [2024-11-08 04:00:42.398482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.513 [2024-11-08 04:00:42.398567] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:07.513 [2024-11-08 04:00:42.548349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.772 [2024-11-08 04:00:42.664693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.772 [2024-11-08 04:00:42.664820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.772 [2024-11-08 04:00:42.664833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.772 [2024-11-08 04:00:42.664841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.772 [2024-11-08 04:00:42.665013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.772 [2024-11-08 04:00:42.666183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:07.772 [2024-11-08 04:00:42.666511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.772 [2024-11-08 04:00:42.666522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.338 04:00:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.338 04:00:43 -- common/autotest_common.sh@862 -- # return 0 00:17:08.338 04:00:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.338 04:00:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.338 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.596 04:00:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.596 04:00:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.596 04:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.596 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.596 [2024-11-08 04:00:43.468181] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.596 04:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.596 04:00:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:08.596 04:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.596 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.596 Malloc0 00:17:08.596 04:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.596 04:00:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:08.596 04:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.596 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.596 04:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.596 04:00:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.596 04:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.596 04:00:43 -- common/autotest_common.sh@10 -- # set +x 
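The bring-up the xtrace lines above walk through reduces to a handful of RPC calls against the freshly started target. A condensed sketch of the same sequence (flags, sizes, and NQNs copied verbatim from the trace; rpc_cmd in the suite is a thin wrapper around this script talking to /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the options the trace passes (-o, 8 KiB I/O unit)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem that allows any host (-a), with the bdev as its namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # the listener on 10.0.0.2:4420 is registered in the very next trace line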
00:17:08.596 04:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.597 04:00:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.597 04:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.597 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.597 [2024-11-08 04:00:43.510708] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.597 04:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.597 04:00:43 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:08.597 04:00:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:08.597 04:00:43 -- nvmf/common.sh@520 -- # config=() 00:17:08.597 04:00:43 -- nvmf/common.sh@520 -- # local subsystem config 00:17:08.597 04:00:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:08.597 04:00:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:08.597 { 00:17:08.597 "params": { 00:17:08.597 "name": "Nvme$subsystem", 00:17:08.597 "trtype": "$TEST_TRANSPORT", 00:17:08.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.597 "adrfam": "ipv4", 00:17:08.597 "trsvcid": "$NVMF_PORT", 00:17:08.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.597 "hdgst": ${hdgst:-false}, 00:17:08.597 "ddgst": ${ddgst:-false} 00:17:08.597 }, 00:17:08.597 "method": "bdev_nvme_attach_controller" 00:17:08.597 } 00:17:08.597 EOF 00:17:08.597 )") 00:17:08.597 04:00:43 -- nvmf/common.sh@542 -- # cat 00:17:08.597 04:00:43 -- nvmf/common.sh@544 -- # jq . 00:17:08.597 04:00:43 -- nvmf/common.sh@545 -- # IFS=, 00:17:08.597 04:00:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:08.597 "params": { 00:17:08.597 "name": "Nvme1", 00:17:08.597 "trtype": "tcp", 00:17:08.597 "traddr": "10.0.0.2", 00:17:08.597 "adrfam": "ipv4", 00:17:08.597 "trsvcid": "4420", 00:17:08.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.597 "hdgst": false, 00:17:08.597 "ddgst": false 00:17:08.597 }, 00:17:08.597 "method": "bdev_nvme_attach_controller" 00:17:08.597 }' 00:17:08.597 [2024-11-08 04:00:43.572293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.597 [2024-11-08 04:00:43.572376] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77524 ] 00:17:08.855 [2024-11-08 04:00:43.720216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.855 [2024-11-08 04:00:43.878902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.855 [2024-11-08 04:00:43.879061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.855 [2024-11-08 04:00:43.879063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.113 [2024-11-08 04:00:44.050209] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
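Worth noting is how bdevio receives its configuration: gen_nvmf_target_json (the nvmf/common.sh helper whose heredoc is traced above) prints a bdev_nvme_attach_controller stanza, and bash process substitution hands it to the binary as an anonymous /dev/fd path, which is where the --json /dev/fd/62 argument comes from. The pattern in isolation:

    # <(...) exposes the generator's stdout as /dev/fd/NN, so no config
    # file ever touches disk.
    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    $bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024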
00:17:09.113 [2024-11-08 04:00:44.050249] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:09.113 I/O targets: 00:17:09.113 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:09.113 00:17:09.113 00:17:09.113 CUnit - A unit testing framework for C - Version 2.1-3 00:17:09.113 http://cunit.sourceforge.net/ 00:17:09.113 00:17:09.113 00:17:09.113 Suite: bdevio tests on: Nvme1n1 00:17:09.113 Test: blockdev write read block ...passed 00:17:09.113 Test: blockdev write zeroes read block ...passed 00:17:09.113 Test: blockdev write zeroes read no split ...passed 00:17:09.113 Test: blockdev write zeroes read split ...passed 00:17:09.113 Test: blockdev write zeroes read split partial ...passed 00:17:09.113 Test: blockdev reset ...[2024-11-08 04:00:44.178084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:09.113 [2024-11-08 04:00:44.178193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139a1c0 (9): Bad file descriptor 00:17:09.113 [2024-11-08 04:00:44.189406] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:09.113 passed 00:17:09.113 Test: blockdev write read 8 blocks ...passed 00:17:09.113 Test: blockdev write read size > 128k ...passed 00:17:09.114 Test: blockdev write read invalid size ...passed 00:17:09.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:09.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:09.373 Test: blockdev write read max offset ...passed 00:17:09.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:09.373 Test: blockdev writev readv 8 blocks ...passed 00:17:09.373 Test: blockdev writev readv 30 x 1block ...passed 00:17:09.373 Test: blockdev writev readv block ...passed 00:17:09.373 Test: blockdev writev readv size > 128k ...passed 00:17:09.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:09.373 Test: blockdev comparev and writev ...[2024-11-08 04:00:44.364165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.364225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.364252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.364263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.364675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.364700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.364717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.364728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.365318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.365361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.365393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.365403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.365827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.365856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.365873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.373 [2024-11-08 04:00:44.365884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:09.373 passed 00:17:09.373 Test: blockdev nvme passthru rw ...passed 00:17:09.373 Test: blockdev nvme passthru vendor specific ...[2024-11-08 04:00:44.448766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.373 [2024-11-08 04:00:44.448793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.448923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.373 [2024-11-08 04:00:44.448939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.449037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.373 [2024-11-08 04:00:44.449052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:09.373 [2024-11-08 04:00:44.449157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.373 [2024-11-08 04:00:44.449172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:09.373 passed 00:17:09.373 Test: blockdev nvme admin passthru ...passed 00:17:09.632 Test: blockdev copy ...passed 00:17:09.632 00:17:09.632 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.632 suites 1 1 n/a 0 0 00:17:09.632 tests 23 23 23 0 0 00:17:09.632 asserts 152 152 152 0 n/a 00:17:09.632 00:17:09.632 Elapsed time = 0.919 seconds 00:17:09.890 04:00:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.890 04:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.890 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:17:09.890 04:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.890 04:00:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:09.890 04:00:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:09.890 04:00:44 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:09.890 04:00:44 -- nvmf/common.sh@116 -- # sync 00:17:10.148 04:00:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:10.148 04:00:45 -- nvmf/common.sh@119 -- # set +e 00:17:10.148 04:00:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:10.148 04:00:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:10.148 rmmod nvme_tcp 00:17:10.148 rmmod nvme_fabrics 00:17:10.148 rmmod nvme_keyring 00:17:10.148 04:00:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:10.148 04:00:45 -- nvmf/common.sh@123 -- # set -e 00:17:10.148 04:00:45 -- nvmf/common.sh@124 -- # return 0 00:17:10.148 04:00:45 -- nvmf/common.sh@477 -- # '[' -n 77470 ']' 00:17:10.148 04:00:45 -- nvmf/common.sh@478 -- # killprocess 77470 00:17:10.148 04:00:45 -- common/autotest_common.sh@936 -- # '[' -z 77470 ']' 00:17:10.148 04:00:45 -- common/autotest_common.sh@940 -- # kill -0 77470 00:17:10.148 04:00:45 -- common/autotest_common.sh@941 -- # uname 00:17:10.148 04:00:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.148 04:00:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77470 00:17:10.148 04:00:45 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:10.148 04:00:45 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:10.148 killing process with pid 77470 00:17:10.148 04:00:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77470' 00:17:10.148 04:00:45 -- common/autotest_common.sh@955 -- # kill 77470 00:17:10.148 04:00:45 -- common/autotest_common.sh@960 -- # wait 77470 00:17:10.716 04:00:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:10.716 04:00:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:10.716 04:00:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:10.716 04:00:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.716 04:00:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:10.716 04:00:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.716 04:00:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.716 04:00:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.716 04:00:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:10.716 ************************************ 00:17:10.716 END TEST nvmf_bdevio_no_huge 00:17:10.716 ************************************ 00:17:10.716 00:17:10.716 real 0m3.849s 00:17:10.716 user 0m13.683s 00:17:10.716 sys 0m1.376s 00:17:10.716 04:00:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:10.716 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:17:10.716 04:00:45 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.716 04:00:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:10.716 04:00:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.716 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:17:10.716 ************************************ 00:17:10.716 START TEST nvmf_tls 00:17:10.716 ************************************ 00:17:10.716 04:00:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.716 * Looking for test storage... 
00:17:10.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:10.716 04:00:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:10.716 04:00:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:10.716 04:00:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:10.975 04:00:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:10.975 04:00:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:10.975 04:00:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:10.975 04:00:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:10.975 04:00:45 -- scripts/common.sh@335 -- # IFS=.-: 00:17:10.975 04:00:45 -- scripts/common.sh@335 -- # read -ra ver1 00:17:10.975 04:00:45 -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.975 04:00:45 -- scripts/common.sh@336 -- # read -ra ver2 00:17:10.975 04:00:45 -- scripts/common.sh@337 -- # local 'op=<' 00:17:10.975 04:00:45 -- scripts/common.sh@339 -- # ver1_l=2 00:17:10.975 04:00:45 -- scripts/common.sh@340 -- # ver2_l=1 00:17:10.975 04:00:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:10.975 04:00:45 -- scripts/common.sh@343 -- # case "$op" in 00:17:10.975 04:00:45 -- scripts/common.sh@344 -- # : 1 00:17:10.975 04:00:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:10.975 04:00:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.975 04:00:45 -- scripts/common.sh@364 -- # decimal 1 00:17:10.975 04:00:45 -- scripts/common.sh@352 -- # local d=1 00:17:10.975 04:00:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.975 04:00:45 -- scripts/common.sh@354 -- # echo 1 00:17:10.975 04:00:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:10.975 04:00:45 -- scripts/common.sh@365 -- # decimal 2 00:17:10.975 04:00:45 -- scripts/common.sh@352 -- # local d=2 00:17:10.975 04:00:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.975 04:00:45 -- scripts/common.sh@354 -- # echo 2 00:17:10.975 04:00:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:10.975 04:00:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:10.975 04:00:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:10.975 04:00:45 -- scripts/common.sh@367 -- # return 0 00:17:10.975 04:00:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.975 04:00:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:10.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.975 --rc genhtml_branch_coverage=1 00:17:10.975 --rc genhtml_function_coverage=1 00:17:10.975 --rc genhtml_legend=1 00:17:10.975 --rc geninfo_all_blocks=1 00:17:10.975 --rc geninfo_unexecuted_blocks=1 00:17:10.975 00:17:10.975 ' 00:17:10.975 04:00:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:10.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.975 --rc genhtml_branch_coverage=1 00:17:10.975 --rc genhtml_function_coverage=1 00:17:10.975 --rc genhtml_legend=1 00:17:10.975 --rc geninfo_all_blocks=1 00:17:10.975 --rc geninfo_unexecuted_blocks=1 00:17:10.975 00:17:10.975 ' 00:17:10.975 04:00:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:10.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.975 --rc genhtml_branch_coverage=1 00:17:10.975 --rc genhtml_function_coverage=1 00:17:10.975 --rc genhtml_legend=1 00:17:10.975 --rc geninfo_all_blocks=1 00:17:10.975 --rc geninfo_unexecuted_blocks=1 00:17:10.975 00:17:10.975 ' 00:17:10.975 
04:00:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:10.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.975 --rc genhtml_branch_coverage=1 00:17:10.975 --rc genhtml_function_coverage=1 00:17:10.975 --rc genhtml_legend=1 00:17:10.975 --rc geninfo_all_blocks=1 00:17:10.975 --rc geninfo_unexecuted_blocks=1 00:17:10.975 00:17:10.975 ' 00:17:10.975 04:00:45 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.975 04:00:45 -- nvmf/common.sh@7 -- # uname -s 00:17:10.975 04:00:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.975 04:00:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.975 04:00:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.976 04:00:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.976 04:00:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.976 04:00:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.976 04:00:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.976 04:00:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.976 04:00:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.976 04:00:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.976 04:00:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:10.976 04:00:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:17:10.976 04:00:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.976 04:00:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.976 04:00:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.976 04:00:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.976 04:00:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.976 04:00:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.976 04:00:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.976 04:00:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.976 04:00:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.976 04:00:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.976 04:00:45 -- paths/export.sh@5 -- # export PATH 00:17:10.976 04:00:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.976 04:00:45 -- nvmf/common.sh@46 -- # : 0 00:17:10.976 04:00:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:10.976 04:00:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:10.976 04:00:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:10.976 04:00:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.976 04:00:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.976 04:00:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:10.976 04:00:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:10.976 04:00:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:10.976 04:00:45 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.976 04:00:45 -- target/tls.sh@71 -- # nvmftestinit 00:17:10.976 04:00:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:10.976 04:00:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.976 04:00:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:10.976 04:00:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:10.976 04:00:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:10.976 04:00:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.976 04:00:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.976 04:00:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.976 04:00:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:10.976 04:00:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:10.976 04:00:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:10.976 04:00:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:10.976 04:00:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:10.976 04:00:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:10.976 04:00:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.976 04:00:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.976 04:00:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.976 04:00:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:10.976 04:00:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.976 04:00:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.976 04:00:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.976 
04:00:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.976 04:00:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.976 04:00:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.976 04:00:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.976 04:00:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.976 04:00:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:10.976 04:00:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:10.976 Cannot find device "nvmf_tgt_br" 00:17:10.976 04:00:45 -- nvmf/common.sh@154 -- # true 00:17:10.976 04:00:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.976 Cannot find device "nvmf_tgt_br2" 00:17:10.976 04:00:45 -- nvmf/common.sh@155 -- # true 00:17:10.976 04:00:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:10.976 04:00:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:10.976 Cannot find device "nvmf_tgt_br" 00:17:10.976 04:00:45 -- nvmf/common.sh@157 -- # true 00:17:10.976 04:00:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:10.976 Cannot find device "nvmf_tgt_br2" 00:17:10.976 04:00:45 -- nvmf/common.sh@158 -- # true 00:17:10.976 04:00:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:10.976 04:00:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:10.976 04:00:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.976 04:00:46 -- nvmf/common.sh@161 -- # true 00:17:10.976 04:00:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.976 04:00:46 -- nvmf/common.sh@162 -- # true 00:17:10.976 04:00:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.976 04:00:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.976 04:00:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.976 04:00:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.976 04:00:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.235 04:00:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.235 04:00:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.235 04:00:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.235 04:00:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.235 04:00:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:11.235 04:00:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:11.235 04:00:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:11.235 04:00:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:11.235 04:00:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.235 04:00:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.235 04:00:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.235 04:00:46 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:11.235 04:00:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:11.235 04:00:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.235 04:00:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.235 04:00:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.235 04:00:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.235 04:00:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.235 04:00:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:11.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:11.235 00:17:11.235 --- 10.0.0.2 ping statistics --- 00:17:11.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.235 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:11.235 04:00:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:11.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:11.235 00:17:11.235 --- 10.0.0.3 ping statistics --- 00:17:11.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.235 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:11.235 04:00:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:11.235 00:17:11.235 --- 10.0.0.1 ping statistics --- 00:17:11.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.235 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:11.235 04:00:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.235 04:00:46 -- nvmf/common.sh@421 -- # return 0 00:17:11.235 04:00:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:11.235 04:00:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.235 04:00:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:11.235 04:00:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:11.235 04:00:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.235 04:00:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:11.235 04:00:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:11.235 04:00:46 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:11.235 04:00:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:11.235 04:00:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.235 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:17:11.235 04:00:46 -- nvmf/common.sh@469 -- # nvmfpid=77720 00:17:11.235 04:00:46 -- nvmf/common.sh@470 -- # waitforlisten 77720 00:17:11.235 04:00:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:11.235 04:00:46 -- common/autotest_common.sh@829 -- # '[' -z 77720 ']' 00:17:11.235 04:00:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
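This is the second pass through nvmf_veth_init (bdevio ran the identical sequence earlier), and the topology the three pings just validated is simple: nvmf_init_if (10.0.0.1) stays in the root namespace for the initiator, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside nvmf_tgt_ns_spdk for the target, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. A minimal sketch of the same wiring, with commands lifted from the trace (root required; the second target pair is set up identically and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port toward the initiator, let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as in the trace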
00:17:11.235 04:00:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.235 04:00:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.235 04:00:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.236 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:17:11.236 [2024-11-08 04:00:46.325071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:11.236 [2024-11-08 04:00:46.325812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.494 [2024-11-08 04:00:46.471239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.494 [2024-11-08 04:00:46.562399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:11.494 [2024-11-08 04:00:46.562598] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.495 [2024-11-08 04:00:46.562617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.495 [2024-11-08 04:00:46.562629] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.495 [2024-11-08 04:00:46.562673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.431 04:00:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.431 04:00:47 -- common/autotest_common.sh@862 -- # return 0 00:17:12.431 04:00:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:12.431 04:00:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.431 04:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 04:00:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.431 04:00:47 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:12.431 04:00:47 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:12.690 true 00:17:12.690 04:00:47 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.690 04:00:47 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:12.980 04:00:47 -- target/tls.sh@82 -- # version=0 00:17:12.980 04:00:47 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:12.980 04:00:47 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.239 04:00:48 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:13.239 04:00:48 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.239 04:00:48 -- target/tls.sh@90 -- # version=13 00:17:13.239 04:00:48 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:13.239 04:00:48 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:13.497 04:00:48 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.497 04:00:48 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:13.759 04:00:48 -- target/tls.sh@98 -- # version=7 00:17:13.759 04:00:48 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:13.759 04:00:48 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.759 04:00:48 
-- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:14.017 04:00:49 -- target/tls.sh@105 -- # ktls=false 00:17:14.017 04:00:49 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:14.017 04:00:49 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:14.276 04:00:49 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.276 04:00:49 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:14.535 04:00:49 -- target/tls.sh@113 -- # ktls=true 00:17:14.535 04:00:49 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:14.535 04:00:49 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:14.793 04:00:49 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.793 04:00:49 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:15.052 04:00:50 -- target/tls.sh@121 -- # ktls=false 00:17:15.052 04:00:50 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:15.052 04:00:50 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:15.052 04:00:50 -- target/tls.sh@49 -- # local key hash crc 00:17:15.052 04:00:50 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:15.052 04:00:50 -- target/tls.sh@51 -- # hash=01 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # gzip -1 -c 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # tail -c8 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # head -c 4 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # crc='p$H�' 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.052 04:00:50 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.052 04:00:50 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:15.052 04:00:50 -- target/tls.sh@49 -- # local key hash crc 00:17:15.052 04:00:50 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:15.052 04:00:50 -- target/tls.sh@51 -- # hash=01 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # gzip -1 -c 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # tail -c8 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # head -c 4 00:17:15.052 04:00:50 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:15.052 04:00:50 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.052 04:00:50 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.052 04:00:50 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.052 04:00:50 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:15.052 04:00:50 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.052 04:00:50 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.052 04:00:50 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.052 04:00:50 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:15.052 04:00:50 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:15.312 04:00:50 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:15.569 04:00:50 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.569 04:00:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.569 04:00:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.828 [2024-11-08 04:00:50.847333] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.828 04:00:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.086 04:00:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.345 [2024-11-08 04:00:51.295458] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.345 [2024-11-08 04:00:51.295689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.345 04:00:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.602 malloc0 00:17:16.602 04:00:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.861 04:00:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.119 04:00:51 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.092 Initializing NVMe Controllers 00:17:27.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:27.092 Initialization complete. Launching workers. 
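The two NVMeTLSkey strings generated above come out of format_interchange_psk, which leans on a gzip detail: the last 8 bytes of a gzip stream are the CRC-32 of the input followed by its length, so 'gzip -1 -c | tail -c8 | head -c4' yields the raw CRC-32 with no extra tooling. The interchange key is then base64(key || crc) wrapped in an 'NVMeTLSkey-1:01:...:' envelope; key1.txt is registered target-side via nvmf_subsystem_add_host --psk, and the listener is created with -k so the port requires TLS. A standalone sketch of the derivation (mirroring the traced helper; like the original, it passes raw CRC bytes through a shell variable, so a key whose CRC contained a NUL or trailing-newline byte would need different handling):

    format_interchange_psk() {
        local key=$1 crc
        # gzip trailer = CRC-32 (little-endian) + input size; keep the CRC
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
        # version 1, hash id 01, base64 of key material plus CRC
        echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff
    # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: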
00:17:27.092 ======================================================== 00:17:27.092 Latency(us) 00:17:27.092 Device Information : IOPS MiB/s Average min max 00:17:27.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11372.34 44.42 5628.68 1807.41 9378.56 00:17:27.092 ======================================================== 00:17:27.092 Total : 11372.34 44.42 5628.68 1807.41 9378.56 00:17:27.092 00:17:27.092 04:01:02 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.092 04:01:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.092 04:01:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.092 04:01:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.092 04:01:02 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:27.092 04:01:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.092 04:01:02 -- target/tls.sh@28 -- # bdevperf_pid=78080 00:17:27.092 04:01:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.092 04:01:02 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.092 04:01:02 -- target/tls.sh@31 -- # waitforlisten 78080 /var/tmp/bdevperf.sock 00:17:27.092 04:01:02 -- common/autotest_common.sh@829 -- # '[' -z 78080 ']' 00:17:27.092 04:01:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.092 04:01:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.092 04:01:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.092 04:01:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.092 04:01:02 -- common/autotest_common.sh@10 -- # set +x 00:17:27.351 [2024-11-08 04:01:02.231557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:27.351 [2024-11-08 04:01:02.231651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78080 ] 00:17:27.351 [2024-11-08 04:01:02.373685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.609 [2024-11-08 04:01:02.492810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.176 04:01:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.176 04:01:03 -- common/autotest_common.sh@862 -- # return 0 00:17:28.176 04:01:03 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:28.434 [2024-11-08 04:01:03.357605] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.434 TLSTESTn1 00:17:28.434 04:01:03 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.434 Running I/O for 10 seconds... 
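bdevperf runs in RPC mode here: -z starts the process idle on /var/tmp/bdevperf.sock, the TLS-protected namespace is attached over that socket as controller TLSTEST (its namespace surfaces as bdev TLSTESTn1), and only then does the bdevperf.py helper fire perform_tests to start the 10-second verify workload. Condensed from the trace (the -t 20 on the helper is, as far as the trace shows, a wait bound rather than a run time):

    bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bp -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # kick off the queued workload over the same RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests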
00:17:40.635 00:17:40.635 Latency(us) 00:17:40.635 [2024-11-08T04:01:15.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.635 [2024-11-08T04:01:15.746Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.635 Verification LBA range: start 0x0 length 0x2000 00:17:40.635 TLSTESTn1 : 10.02 6132.83 23.96 0.00 0.00 20837.05 4200.26 17992.61 00:17:40.635 [2024-11-08T04:01:15.746Z] =================================================================================================================== 00:17:40.635 [2024-11-08T04:01:15.746Z] Total : 6132.83 23.96 0.00 0.00 20837.05 4200.26 17992.61 00:17:40.635 0 00:17:40.635 04:01:13 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.635 04:01:13 -- target/tls.sh@45 -- # killprocess 78080 00:17:40.635 04:01:13 -- common/autotest_common.sh@936 -- # '[' -z 78080 ']' 00:17:40.635 04:01:13 -- common/autotest_common.sh@940 -- # kill -0 78080 00:17:40.635 04:01:13 -- common/autotest_common.sh@941 -- # uname 00:17:40.635 04:01:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.635 04:01:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78080 00:17:40.635 killing process with pid 78080 00:17:40.635 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.635 00:17:40.635 Latency(us) 00:17:40.635 [2024-11-08T04:01:15.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.635 [2024-11-08T04:01:15.746Z] =================================================================================================================== 00:17:40.635 [2024-11-08T04:01:15.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.635 04:01:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.635 04:01:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.635 04:01:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78080' 00:17:40.635 04:01:13 -- common/autotest_common.sh@955 -- # kill 78080 00:17:40.635 04:01:13 -- common/autotest_common.sh@960 -- # wait 78080 00:17:40.635 04:01:13 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:40.635 04:01:13 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.635 04:01:13 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:40.635 04:01:13 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:40.635 04:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.635 04:01:13 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:40.635 04:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.635 04:01:13 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:40.636 04:01:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.636 04:01:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.636 04:01:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.636 04:01:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:40.636 04:01:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.636 
04:01:13 -- target/tls.sh@28 -- # bdevperf_pid=78231 00:17:40.636 04:01:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.636 04:01:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.636 04:01:13 -- target/tls.sh@31 -- # waitforlisten 78231 /var/tmp/bdevperf.sock 00:17:40.636 04:01:13 -- common/autotest_common.sh@829 -- # '[' -z 78231 ']' 00:17:40.636 04:01:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.636 04:01:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.636 04:01:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.636 04:01:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.636 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.636 [2024-11-08 04:01:13.970257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.636 [2024-11-08 04:01:13.970561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78231 ] 00:17:40.636 [2024-11-08 04:01:14.101122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.636 [2024-11-08 04:01:14.177230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.636 04:01:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.636 04:01:14 -- common/autotest_common.sh@862 -- # return 0 00:17:40.636 04:01:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:40.636 [2024-11-08 04:01:15.106335] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.636 [2024-11-08 04:01:15.113139] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.636 [2024-11-08 04:01:15.113538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc43d0 (107): Transport endpoint is not connected 00:17:40.636 [2024-11-08 04:01:15.114526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc43d0 (9): Bad file descriptor 00:17:40.636 [2024-11-08 04:01:15.115522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.636 [2024-11-08 04:01:15.115542] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.636 [2024-11-08 04:01:15.115552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
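[Annotation] The attach that just failed was issued over bdevperf's RPC socket, and the JSON-RPC dump that follows shows the exact parameters. For reference, a minimal Python sketch of the same call as raw JSON-RPC 2.0 over the UNIX socket -- illustrative only, not SPDK code, and the read loop that waits for one parseable JSON reply is an assumption about framing:

    import json
    import socket

    def rpc_call(sock_path: str, method: str, params: dict) -> dict:
        # SPDK RPC servers (including bdevperf started with -r) speak
        # JSON-RPC 2.0 over a UNIX domain socket.
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise RuntimeError("socket closed before a full reply")
                buf += chunk
                try:
                    return json.loads(buf)  # one complete JSON object = reply
                except ValueError:
                    continue  # reply not fully buffered yet

    # Same parameters as the failing call dumped below: key2.txt is not the
    # PSK the target was provisioned with, so the reply carries the -32602
    # "Invalid parameters" error echoed in the log.
    rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt",
    })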
00:17:40.636 2024/11/08 04:01:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:40.636 request: 00:17:40.636 { 00:17:40.636 "method": "bdev_nvme_attach_controller", 00:17:40.636 "params": { 00:17:40.636 "name": "TLSTEST", 00:17:40.636 "trtype": "tcp", 00:17:40.636 "traddr": "10.0.0.2", 00:17:40.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.636 "adrfam": "ipv4", 00:17:40.636 "trsvcid": "4420", 00:17:40.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.636 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:40.636 } 00:17:40.636 } 00:17:40.636 Got JSON-RPC error response 00:17:40.636 GoRPCClient: error on JSON-RPC call 00:17:40.636 04:01:15 -- target/tls.sh@36 -- # killprocess 78231 00:17:40.636 04:01:15 -- common/autotest_common.sh@936 -- # '[' -z 78231 ']' 00:17:40.636 04:01:15 -- common/autotest_common.sh@940 -- # kill -0 78231 00:17:40.636 04:01:15 -- common/autotest_common.sh@941 -- # uname 00:17:40.636 04:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.636 04:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78231 00:17:40.636 killing process with pid 78231 00:17:40.636 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.636 00:17:40.636 Latency(us) 00:17:40.636 [2024-11-08T04:01:15.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.636 [2024-11-08T04:01:15.747Z] =================================================================================================================== 00:17:40.636 [2024-11-08T04:01:15.747Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.636 04:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.636 04:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.636 04:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78231' 00:17:40.636 04:01:15 -- common/autotest_common.sh@955 -- # kill 78231 00:17:40.636 04:01:15 -- common/autotest_common.sh@960 -- # wait 78231 00:17:40.636 04:01:15 -- target/tls.sh@37 -- # return 1 00:17:40.636 04:01:15 -- common/autotest_common.sh@653 -- # es=1 00:17:40.636 04:01:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.636 04:01:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.636 04:01:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.636 04:01:15 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.636 04:01:15 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.636 04:01:15 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.636 04:01:15 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:40.636 04:01:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.636 04:01:15 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:40.636 04:01:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.636 04:01:15 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.636 04:01:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.636 04:01:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.636 04:01:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:40.636 04:01:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:40.636 04:01:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.636 04:01:15 -- target/tls.sh@28 -- # bdevperf_pid=78281 00:17:40.636 04:01:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.636 04:01:15 -- target/tls.sh@31 -- # waitforlisten 78281 /var/tmp/bdevperf.sock 00:17:40.636 04:01:15 -- common/autotest_common.sh@829 -- # '[' -z 78281 ']' 00:17:40.636 04:01:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.636 04:01:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.636 04:01:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.636 04:01:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.636 04:01:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.636 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:17:40.636 [2024-11-08 04:01:15.510460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.636 [2024-11-08 04:01:15.510744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78281 ] 00:17:40.636 [2024-11-08 04:01:15.647554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.636 [2024-11-08 04:01:15.725581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.577 04:01:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.577 04:01:16 -- common/autotest_common.sh@862 -- # return 0 00:17:41.577 04:01:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:41.836 [2024-11-08 04:01:16.730872] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.836 [2024-11-08 04:01:16.735356] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.836 [2024-11-08 04:01:16.735394] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.836 [2024-11-08 04:01:16.735501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.836 [2024-11-08 04:01:16.736087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x17dd3d0 (107): Transport endpoint is not connected 00:17:41.836 [2024-11-08 04:01:16.737072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dd3d0 (9): Bad file descriptor 00:17:41.836 [2024-11-08 04:01:16.738069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.836 [2024-11-08 04:01:16.738090] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.836 [2024-11-08 04:01:16.738105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:41.836 2024/11/08 04:01:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:41.836 request: 00:17:41.836 { 00:17:41.836 "method": "bdev_nvme_attach_controller", 00:17:41.836 "params": { 00:17:41.836 "name": "TLSTEST", 00:17:41.836 "trtype": "tcp", 00:17:41.836 "traddr": "10.0.0.2", 00:17:41.836 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:41.836 "adrfam": "ipv4", 00:17:41.836 "trsvcid": "4420", 00:17:41.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.836 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:41.836 } 00:17:41.836 } 00:17:41.836 Got JSON-RPC error response 00:17:41.836 GoRPCClient: error on JSON-RPC call 00:17:41.836 04:01:16 -- target/tls.sh@36 -- # killprocess 78281 00:17:41.836 04:01:16 -- common/autotest_common.sh@936 -- # '[' -z 78281 ']' 00:17:41.836 04:01:16 -- common/autotest_common.sh@940 -- # kill -0 78281 00:17:41.836 04:01:16 -- common/autotest_common.sh@941 -- # uname 00:17:41.836 04:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.836 04:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78281 00:17:41.836 killing process with pid 78281 00:17:41.836 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.836 00:17:41.836 Latency(us) 00:17:41.836 [2024-11-08T04:01:16.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.836 [2024-11-08T04:01:16.947Z] =================================================================================================================== 00:17:41.836 [2024-11-08T04:01:16.947Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.836 04:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.836 04:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.836 04:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78281' 00:17:41.836 04:01:16 -- common/autotest_common.sh@955 -- # kill 78281 00:17:41.836 04:01:16 -- common/autotest_common.sh@960 -- # wait 78281 00:17:42.095 04:01:17 -- target/tls.sh@37 -- # return 1 00:17:42.095 04:01:17 -- common/autotest_common.sh@653 -- # es=1 00:17:42.095 04:01:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.095 04:01:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.095 04:01:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.095 04:01:17 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.095 04:01:17 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:42.095 04:01:17 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.095 04:01:17 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:42.095 04:01:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.095 04:01:17 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:42.095 04:01:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.095 04:01:17 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.095 04:01:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.095 04:01:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:42.095 04:01:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.095 04:01:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:42.095 04:01:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.095 04:01:17 -- target/tls.sh@28 -- # bdevperf_pid=78322 00:17:42.095 04:01:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.095 04:01:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.095 04:01:17 -- target/tls.sh@31 -- # waitforlisten 78322 /var/tmp/bdevperf.sock 00:17:42.095 04:01:17 -- common/autotest_common.sh@829 -- # '[' -z 78322 ']' 00:17:42.095 04:01:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.095 04:01:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.095 04:01:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.095 04:01:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.095 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:17:42.095 [2024-11-08 04:01:17.133336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:42.095 [2024-11-08 04:01:17.133648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78322 ] 00:17:42.354 [2024-11-08 04:01:17.273397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.354 [2024-11-08 04:01:17.349536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.321 04:01:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.321 04:01:18 -- common/autotest_common.sh@862 -- # return 0 00:17:43.321 04:01:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:43.321 [2024-11-08 04:01:18.310498] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.321 [2024-11-08 04:01:18.315042] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.321 [2024-11-08 04:01:18.315079] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.321 [2024-11-08 04:01:18.315165] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:43.321 [2024-11-08 04:01:18.315774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d253d0 (107): Transport endpoint is not connected 00:17:43.321 [2024-11-08 04:01:18.316754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d253d0 (9): Bad file descriptor 00:17:43.321 [2024-11-08 04:01:18.317751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:43.321 [2024-11-08 04:01:18.317783] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.321 [2024-11-08 04:01:18.317798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
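[Annotation] The tcp_sock_get_key/posix errors above show what the target keys its PSK lookup on: an identity string combining a version/hash tag with the host NQN and subsystem NQN. A tiny sketch reproducing the strings printed in the log (the "NVMe0R01" prefix is copied verbatim from the errors; its field encoding is not derived here):

    def psk_identity(hostnqn: str, subnqn: str) -> str:
        # Mirrors the identity strings logged by tcp_sock_get_key above.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # key1.txt was registered for host1 against cnode1 earlier in the run, so
    # this lookup, host1 against cnode2, finds nothing and the handshake
    # never completes.
    print(psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2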
00:17:43.321 2024/11/08 04:01:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:43.321 request: 00:17:43.321 { 00:17:43.321 "method": "bdev_nvme_attach_controller", 00:17:43.321 "params": { 00:17:43.321 "name": "TLSTEST", 00:17:43.321 "trtype": "tcp", 00:17:43.321 "traddr": "10.0.0.2", 00:17:43.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.321 "adrfam": "ipv4", 00:17:43.321 "trsvcid": "4420", 00:17:43.321 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:43.321 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:43.321 } 00:17:43.321 } 00:17:43.321 Got JSON-RPC error response 00:17:43.321 GoRPCClient: error on JSON-RPC call 00:17:43.321 04:01:18 -- target/tls.sh@36 -- # killprocess 78322 00:17:43.321 04:01:18 -- common/autotest_common.sh@936 -- # '[' -z 78322 ']' 00:17:43.321 04:01:18 -- common/autotest_common.sh@940 -- # kill -0 78322 00:17:43.321 04:01:18 -- common/autotest_common.sh@941 -- # uname 00:17:43.321 04:01:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.321 04:01:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78322 00:17:43.321 killing process with pid 78322 00:17:43.321 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.321 00:17:43.321 Latency(us) 00:17:43.321 [2024-11-08T04:01:18.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.321 [2024-11-08T04:01:18.432Z] =================================================================================================================== 00:17:43.321 [2024-11-08T04:01:18.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.321 04:01:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:43.321 04:01:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:43.321 04:01:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78322' 00:17:43.321 04:01:18 -- common/autotest_common.sh@955 -- # kill 78322 00:17:43.321 04:01:18 -- common/autotest_common.sh@960 -- # wait 78322 00:17:43.590 04:01:18 -- target/tls.sh@37 -- # return 1 00:17:43.590 04:01:18 -- common/autotest_common.sh@653 -- # es=1 00:17:43.590 04:01:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.590 04:01:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.590 04:01:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.590 04:01:18 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.590 04:01:18 -- common/autotest_common.sh@650 -- # local es=0 00:17:43.590 04:01:18 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.590 04:01:18 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:43.590 04:01:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.590 04:01:18 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:43.590 04:01:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.590 04:01:18 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.590 04:01:18 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.590 04:01:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.590 04:01:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.590 04:01:18 -- target/tls.sh@23 -- # psk= 00:17:43.590 04:01:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.590 04:01:18 -- target/tls.sh@28 -- # bdevperf_pid=78372 00:17:43.590 04:01:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.590 04:01:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.590 04:01:18 -- target/tls.sh@31 -- # waitforlisten 78372 /var/tmp/bdevperf.sock 00:17:43.590 04:01:18 -- common/autotest_common.sh@829 -- # '[' -z 78372 ']' 00:17:43.590 04:01:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.590 04:01:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.590 04:01:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.590 04:01:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.590 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:17:43.849 [2024-11-08 04:01:18.708348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.849 [2024-11-08 04:01:18.708628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78372 ] 00:17:43.849 [2024-11-08 04:01:18.842760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.849 [2024-11-08 04:01:18.922925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.786 04:01:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.786 04:01:19 -- common/autotest_common.sh@862 -- # return 0 00:17:44.786 04:01:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:45.044 [2024-11-08 04:01:19.903459] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.044 [2024-11-08 04:01:19.904695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a00dc0 (9): Bad file descriptor 00:17:45.044 [2024-11-08 04:01:19.905688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.044 [2024-11-08 04:01:19.905710] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.044 [2024-11-08 04:01:19.905721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
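[Annotation] This negative case omits the psk parameter entirely; the request dumped below is the earlier key2.txt request minus its psk field. Reusing the rpc_call sketch from the key2.txt case above (same caveats), the equivalent call would be:

    # No "psk" key at all this time: the target is listening with TLS
    # enabled, so the plain TCP connection is torn down before controller
    # initialization completes, failing the same way.
    rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
    })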
00:17:45.044 2024/11/08 04:01:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:45.044 request: 00:17:45.044 { 00:17:45.044 "method": "bdev_nvme_attach_controller", 00:17:45.044 "params": { 00:17:45.044 "name": "TLSTEST", 00:17:45.044 "trtype": "tcp", 00:17:45.044 "traddr": "10.0.0.2", 00:17:45.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.044 "adrfam": "ipv4", 00:17:45.044 "trsvcid": "4420", 00:17:45.044 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:45.044 } 00:17:45.044 } 00:17:45.044 Got JSON-RPC error response 00:17:45.044 GoRPCClient: error on JSON-RPC call 00:17:45.044 04:01:19 -- target/tls.sh@36 -- # killprocess 78372 00:17:45.044 04:01:19 -- common/autotest_common.sh@936 -- # '[' -z 78372 ']' 00:17:45.044 04:01:19 -- common/autotest_common.sh@940 -- # kill -0 78372 00:17:45.044 04:01:19 -- common/autotest_common.sh@941 -- # uname 00:17:45.044 04:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.044 04:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78372 00:17:45.044 killing process with pid 78372 00:17:45.044 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.044 00:17:45.044 Latency(us) 00:17:45.044 [2024-11-08T04:01:20.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.044 [2024-11-08T04:01:20.155Z] =================================================================================================================== 00:17:45.044 [2024-11-08T04:01:20.155Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.044 04:01:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.044 04:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.044 04:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78372' 00:17:45.044 04:01:19 -- common/autotest_common.sh@955 -- # kill 78372 00:17:45.044 04:01:19 -- common/autotest_common.sh@960 -- # wait 78372 00:17:45.303 04:01:20 -- target/tls.sh@37 -- # return 1 00:17:45.303 04:01:20 -- common/autotest_common.sh@653 -- # es=1 00:17:45.303 04:01:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.303 04:01:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.303 04:01:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.303 04:01:20 -- target/tls.sh@167 -- # killprocess 77720 00:17:45.303 04:01:20 -- common/autotest_common.sh@936 -- # '[' -z 77720 ']' 00:17:45.303 04:01:20 -- common/autotest_common.sh@940 -- # kill -0 77720 00:17:45.303 04:01:20 -- common/autotest_common.sh@941 -- # uname 00:17:45.303 04:01:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.303 04:01:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77720 00:17:45.303 killing process with pid 77720 00:17:45.303 04:01:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.303 04:01:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.303 04:01:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77720' 00:17:45.303 04:01:20 -- common/autotest_common.sh@955 -- # kill 77720 00:17:45.303 04:01:20 -- common/autotest_common.sh@960 -- # wait 77720 00:17:45.562 04:01:20 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:45.562 04:01:20 -- target/tls.sh@49 -- # local key hash crc 00:17:45.562 04:01:20 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:45.562 04:01:20 -- target/tls.sh@51 -- # hash=02 00:17:45.562 04:01:20 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:45.562 04:01:20 -- target/tls.sh@52 -- # gzip -1 -c 00:17:45.562 04:01:20 -- target/tls.sh@52 -- # head -c 4 00:17:45.562 04:01:20 -- target/tls.sh@52 -- # tail -c8 00:17:45.562 04:01:20 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:45.562 04:01:20 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:45.562 04:01:20 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:45.562 04:01:20 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.562 04:01:20 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.562 04:01:20 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.562 04:01:20 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.562 04:01:20 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.562 04:01:20 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:45.562 04:01:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.562 04:01:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.562 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:17:45.562 04:01:20 -- nvmf/common.sh@469 -- # nvmfpid=78430 00:17:45.562 04:01:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.562 04:01:20 -- nvmf/common.sh@470 -- # waitforlisten 78430 00:17:45.562 04:01:20 -- common/autotest_common.sh@829 -- # '[' -z 78430 ']' 00:17:45.562 04:01:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.562 04:01:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.562 04:01:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.562 04:01:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.562 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:17:45.562 [2024-11-08 04:01:20.610269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.562 [2024-11-08 04:01:20.610369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.821 [2024-11-08 04:01:20.746501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.821 [2024-11-08 04:01:20.819226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.821 [2024-11-08 04:01:20.819364] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
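[Annotation] format_interchange_psk above assembles the NVMe TLS interchange key by hand: gzip's trailer holds the CRC32 of its input, the head/tail stages pull those 4 bytes out, and the result is appended to the configured key and base64-encoded under the "NVMeTLSkey-1:02:" tag (02 being the hash variant the test asked for). A Python equivalent of that pipeline -- a sketch, relying on zlib.crc32 computing the same CRC that gzip stores:

    import base64
    import zlib

    key = "00112233445566778899aabbccddeeff0011223344556677"
    # gzip stores its CRC32 little-endian in the trailer; the head/tail
    # stages in the log extract exactly those 4 bytes.
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    print(f"NVMeTLSkey-1:02:{b64}:")
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

Note the CRC is computed over the ASCII characters of the hex string, not over decoded key bytes; that is what the shell pipeline does and what the base64 payload in the log decodes back to (the unprintable crc bytes shown as '?e?' above are 0xC1 0x65 0xCD 0x27).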
00:17:45.821 [2024-11-08 04:01:20.819376] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.821 [2024-11-08 04:01:20.819385] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.821 [2024-11-08 04:01:20.819433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.757 04:01:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.757 04:01:21 -- common/autotest_common.sh@862 -- # return 0 00:17:46.757 04:01:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:46.757 04:01:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.757 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:17:46.757 04:01:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.757 04:01:21 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.757 04:01:21 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.757 04:01:21 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.016 [2024-11-08 04:01:21.903546] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.016 04:01:21 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:47.275 04:01:22 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.534 [2024-11-08 04:01:22.399639] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.534 [2024-11-08 04:01:22.399933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.534 04:01:22 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:47.534 malloc0 00:17:47.792 04:01:22 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.792 04:01:22 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.051 04:01:23 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.051 04:01:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.051 04:01:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.051 04:01:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.051 04:01:23 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:48.051 04:01:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.051 04:01:23 -- target/tls.sh@28 -- # bdevperf_pid=78531 00:17:48.051 04:01:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.051 04:01:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.051 04:01:23 -- target/tls.sh@31 -- # waitforlisten 78531 /var/tmp/bdevperf.sock 00:17:48.051 04:01:23 -- 
common/autotest_common.sh@829 -- # '[' -z 78531 ']' 00:17:48.051 04:01:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.051 04:01:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.051 04:01:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.051 04:01:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.051 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:48.051 [2024-11-08 04:01:23.115827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:48.051 [2024-11-08 04:01:23.115936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78531 ] 00:17:48.309 [2024-11-08 04:01:23.253872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.309 [2024-11-08 04:01:23.361496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.245 04:01:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.245 04:01:24 -- common/autotest_common.sh@862 -- # return 0 00:17:49.245 04:01:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.245 [2024-11-08 04:01:24.223373] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.245 TLSTESTn1 00:17:49.245 04:01:24 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.504 Running I/O for 10 seconds... 
00:17:59.480 00:17:59.480 Latency(us) 00:17:59.480 [2024-11-08T04:01:34.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.480 [2024-11-08T04:01:34.591Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.480 Verification LBA range: start 0x0 length 0x2000 00:17:59.480 TLSTESTn1 : 10.02 5930.72 23.17 0.00 0.00 21547.50 4319.42 19779.96 00:17:59.480 [2024-11-08T04:01:34.591Z] =================================================================================================================== 00:17:59.480 [2024-11-08T04:01:34.591Z] Total : 5930.72 23.17 0.00 0.00 21547.50 4319.42 19779.96 00:17:59.480 0 00:17:59.480 04:01:34 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.480 04:01:34 -- target/tls.sh@45 -- # killprocess 78531 00:17:59.480 04:01:34 -- common/autotest_common.sh@936 -- # '[' -z 78531 ']' 00:17:59.480 04:01:34 -- common/autotest_common.sh@940 -- # kill -0 78531 00:17:59.480 04:01:34 -- common/autotest_common.sh@941 -- # uname 00:17:59.480 04:01:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.480 04:01:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78531 00:17:59.480 04:01:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:59.480 04:01:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:59.480 killing process with pid 78531 00:17:59.480 04:01:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78531' 00:17:59.480 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.480 00:17:59.480 Latency(us) 00:17:59.480 [2024-11-08T04:01:34.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.480 [2024-11-08T04:01:34.591Z] =================================================================================================================== 00:17:59.480 [2024-11-08T04:01:34.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.480 04:01:34 -- common/autotest_common.sh@955 -- # kill 78531 00:17:59.480 04:01:34 -- common/autotest_common.sh@960 -- # wait 78531 00:17:59.740 04:01:34 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.740 04:01:34 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.740 04:01:34 -- common/autotest_common.sh@650 -- # local es=0 00:17:59.740 04:01:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.740 04:01:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:59.740 04:01:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.740 04:01:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:59.740 04:01:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.740 04:01:34 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.740 04:01:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.740 04:01:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.740 04:01:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.740 04:01:34 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:59.740 04:01:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.740 04:01:34 -- target/tls.sh@28 -- # bdevperf_pid=78678 00:17:59.740 04:01:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.740 04:01:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.740 04:01:34 -- target/tls.sh@31 -- # waitforlisten 78678 /var/tmp/bdevperf.sock 00:17:59.740 04:01:34 -- common/autotest_common.sh@829 -- # '[' -z 78678 ']' 00:17:59.740 04:01:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.740 04:01:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.740 04:01:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.740 04:01:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.740 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:17:59.740 [2024-11-08 04:01:34.833038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:59.740 [2024-11-08 04:01:34.833145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78678 ] 00:17:59.999 [2024-11-08 04:01:34.973956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.999 [2024-11-08 04:01:35.051770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.936 04:01:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.936 04:01:35 -- common/autotest_common.sh@862 -- # return 0 00:18:00.936 04:01:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:00.936 [2024-11-08 04:01:36.033300] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.936 [2024-11-08 04:01:36.033361] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:00.936 2024/11/08 04:01:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:00.936 request: 00:18:00.936 { 00:18:00.936 "method": "bdev_nvme_attach_controller", 00:18:00.936 "params": { 00:18:00.936 "name": "TLSTEST", 00:18:00.936 "trtype": "tcp", 00:18:00.936 "traddr": "10.0.0.2", 00:18:00.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.936 "adrfam": "ipv4", 00:18:00.936 "trsvcid": "4420", 00:18:00.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.936 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:00.936 } 00:18:00.936 } 00:18:00.936 Got 
JSON-RPC error response 00:18:00.936 GoRPCClient: error on JSON-RPC call 00:18:01.195 04:01:36 -- target/tls.sh@36 -- # killprocess 78678 00:18:01.195 04:01:36 -- common/autotest_common.sh@936 -- # '[' -z 78678 ']' 00:18:01.195 04:01:36 -- common/autotest_common.sh@940 -- # kill -0 78678 00:18:01.195 04:01:36 -- common/autotest_common.sh@941 -- # uname 00:18:01.195 04:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.195 04:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78678 00:18:01.195 04:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:01.195 04:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:01.195 04:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78678' 00:18:01.195 killing process with pid 78678 00:18:01.195 04:01:36 -- common/autotest_common.sh@955 -- # kill 78678 00:18:01.195 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.195 00:18:01.195 Latency(us) 00:18:01.195 [2024-11-08T04:01:36.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.195 [2024-11-08T04:01:36.306Z] =================================================================================================================== 00:18:01.195 [2024-11-08T04:01:36.306Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.195 04:01:36 -- common/autotest_common.sh@960 -- # wait 78678 00:18:01.455 04:01:36 -- target/tls.sh@37 -- # return 1 00:18:01.455 04:01:36 -- common/autotest_common.sh@653 -- # es=1 00:18:01.455 04:01:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.455 04:01:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.455 04:01:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.455 04:01:36 -- target/tls.sh@183 -- # killprocess 78430 00:18:01.455 04:01:36 -- common/autotest_common.sh@936 -- # '[' -z 78430 ']' 00:18:01.455 04:01:36 -- common/autotest_common.sh@940 -- # kill -0 78430 00:18:01.455 04:01:36 -- common/autotest_common.sh@941 -- # uname 00:18:01.455 04:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.455 04:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78430 00:18:01.455 04:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.455 04:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.455 killing process with pid 78430 00:18:01.455 04:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78430' 00:18:01.455 04:01:36 -- common/autotest_common.sh@955 -- # kill 78430 00:18:01.455 04:01:36 -- common/autotest_common.sh@960 -- # wait 78430 00:18:01.714 04:01:36 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:01.714 04:01:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.714 04:01:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.714 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:18:01.714 04:01:36 -- nvmf/common.sh@469 -- # nvmfpid=78734 00:18:01.714 04:01:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.714 04:01:36 -- nvmf/common.sh@470 -- # waitforlisten 78734 00:18:01.714 04:01:36 -- common/autotest_common.sh@829 -- # '[' -z 78734 ']' 00:18:01.714 04:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.714 04:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.714 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.714 04:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.714 04:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.714 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:18:01.714 [2024-11-08 04:01:36.685455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.714 [2024-11-08 04:01:36.685520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.714 [2024-11-08 04:01:36.816641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.974 [2024-11-08 04:01:36.905242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.974 [2024-11-08 04:01:36.905376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.974 [2024-11-08 04:01:36.905389] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.974 [2024-11-08 04:01:36.905398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.974 [2024-11-08 04:01:36.905478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.542 04:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.542 04:01:37 -- common/autotest_common.sh@862 -- # return 0 00:18:02.542 04:01:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.542 04:01:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.542 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:18:02.542 04:01:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.542 04:01:37 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.542 04:01:37 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.542 04:01:37 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.542 04:01:37 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:02.542 04:01:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.542 04:01:37 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:02.542 04:01:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.542 04:01:37 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.542 04:01:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.542 04:01:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.802 [2024-11-08 04:01:37.867574] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.802 04:01:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.061 04:01:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.320 
[2024-11-08 04:01:38.271658] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.320 [2024-11-08 04:01:38.271968] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.320 04:01:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.580 malloc0 00:18:03.580 04:01:38 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.839 04:01:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.839 [2024-11-08 04:01:38.871333] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:03.839 [2024-11-08 04:01:38.871376] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:03.839 [2024-11-08 04:01:38.871393] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:03.839 2024/11/08 04:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:03.839 request: 00:18:03.839 { 00:18:03.839 "method": "nvmf_subsystem_add_host", 00:18:03.839 "params": { 00:18:03.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.839 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.839 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:03.839 } 00:18:03.839 } 00:18:03.839 Got JSON-RPC error response 00:18:03.839 GoRPCClient: error on JSON-RPC call 00:18:03.839 04:01:38 -- common/autotest_common.sh@653 -- # es=1 00:18:03.839 04:01:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.839 04:01:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.839 04:01:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.839 04:01:38 -- target/tls.sh@189 -- # killprocess 78734 00:18:03.839 04:01:38 -- common/autotest_common.sh@936 -- # '[' -z 78734 ']' 00:18:03.839 04:01:38 -- common/autotest_common.sh@940 -- # kill -0 78734 00:18:03.839 04:01:38 -- common/autotest_common.sh@941 -- # uname 00:18:03.839 04:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.839 04:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78734 00:18:03.839 04:01:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:03.839 04:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:03.839 killing process with pid 78734 00:18:03.839 04:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78734' 00:18:03.840 04:01:38 -- common/autotest_common.sh@955 -- # kill 78734 00:18:03.840 04:01:38 -- common/autotest_common.sh@960 -- # wait 78734 00:18:04.099 04:01:39 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:04.099 04:01:39 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:04.099 04:01:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.099 04:01:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.099 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:18:04.099 04:01:39 -- nvmf/common.sh@469 -- # nvmfpid=78845 
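[Annotation] Both rejections above stem from the same guard: bdev_nvme_attach_controller on the initiator (bdev_nvme_rpc.c) and nvmf_subsystem_add_host on the target (tcp_load_psk) refuse the key file once the test chmods it to 0666, and the chmod 0600 just above makes it acceptable again for the final pass. A sketch of an equivalent pre-flight check -- illustrative only, the 0o077 criterion is an assumption rather than SPDK's actual code:

    import os
    import stat

    def check_psk_perms(path: str) -> None:
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:  # any group/other bits set, e.g. 0666
            raise PermissionError(
                f"{path}: mode {oct(mode)} too permissive for a PSK, want 0600")

    check_psk_perms("/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt")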
00:18:04.100 04:01:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.100 04:01:39 -- nvmf/common.sh@470 -- # waitforlisten 78845 00:18:04.100 04:01:39 -- common/autotest_common.sh@829 -- # '[' -z 78845 ']' 00:18:04.100 04:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.100 04:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.100 04:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.100 04:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.100 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:18:04.386 [2024-11-08 04:01:39.231998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:04.386 [2024-11-08 04:01:39.232089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.386 [2024-11-08 04:01:39.370469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.386 [2024-11-08 04:01:39.438101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.386 [2024-11-08 04:01:39.438243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.386 [2024-11-08 04:01:39.438256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.386 [2024-11-08 04:01:39.438263] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
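[Annotation] The bring-up that follows (tls.sh@194) repeats the target-side RPC sequence one more time, now against the 0600 key, then drives the timed verify run from the client. Condensed from the exact rpc.py and bdevperf.py invocations in the log, as a subprocess sketch:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    KEY = "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    # Target side: transport, subsystem, TLS listener (-k), namespace,
    # then the host allowed in with its PSK.
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", KEY)

    # Client side, once bdevperf (started suspended with -z) has attached
    # TLSTESTn1 over its own RPC socket: trigger the 10-second verify run.
    subprocess.run([
        "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py",
        "-t", "20", "-s", "/var/tmp/bdevperf.sock", "perform_tests",
    ], check=True)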
00:18:04.386 [2024-11-08 04:01:39.438295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.983 04:01:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.983 04:01:40 -- common/autotest_common.sh@862 -- # return 0 00:18:04.983 04:01:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.983 04:01:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.983 04:01:40 -- common/autotest_common.sh@10 -- # set +x 00:18:05.242 04:01:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.242 04:01:40 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.242 04:01:40 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.242 04:01:40 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.501 [2024-11-08 04:01:40.377970] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.501 04:01:40 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.760 04:01:40 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:06.019 [2024-11-08 04:01:40.974140] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.019 [2024-11-08 04:01:40.974428] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.019 04:01:40 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.278 malloc0 00:18:06.278 04:01:41 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.537 04:01:41 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:06.796 04:01:41 -- target/tls.sh@197 -- # bdevperf_pid=78943 00:18:06.796 04:01:41 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.796 04:01:41 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.796 04:01:41 -- target/tls.sh@200 -- # waitforlisten 78943 /var/tmp/bdevperf.sock 00:18:06.796 04:01:41 -- common/autotest_common.sh@829 -- # '[' -z 78943 ']' 00:18:06.796 04:01:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.796 04:01:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.796 04:01:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.796 04:01:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.796 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:18:06.796 [2024-11-08 04:01:41.747842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:06.796 [2024-11-08 04:01:41.747916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78943 ] 00:18:06.796 [2024-11-08 04:01:41.884442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.055 [2024-11-08 04:01:41.992026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.623 04:01:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.623 04:01:42 -- common/autotest_common.sh@862 -- # return 0 00:18:07.623 04:01:42 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:07.882 [2024-11-08 04:01:42.912051] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.882 TLSTESTn1 00:18:08.140 04:01:42 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:08.399 04:01:43 -- target/tls.sh@205 -- # tgtconf='{ 00:18:08.399 "subsystems": [ 00:18:08.399 { 00:18:08.399 "subsystem": "iobuf", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "iobuf_set_options", 00:18:08.399 "params": { 00:18:08.399 "large_bufsize": 135168, 00:18:08.399 "large_pool_count": 1024, 00:18:08.399 "small_bufsize": 8192, 00:18:08.399 "small_pool_count": 8192 00:18:08.399 } 00:18:08.399 } 00:18:08.399 ] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "sock", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "sock_impl_set_options", 00:18:08.399 "params": { 00:18:08.399 "enable_ktls": false, 00:18:08.399 "enable_placement_id": 0, 00:18:08.399 "enable_quickack": false, 00:18:08.399 "enable_recv_pipe": true, 00:18:08.399 "enable_zerocopy_send_client": false, 00:18:08.399 "enable_zerocopy_send_server": true, 00:18:08.399 "impl_name": "posix", 00:18:08.399 "recv_buf_size": 2097152, 00:18:08.399 "send_buf_size": 2097152, 00:18:08.399 "tls_version": 0, 00:18:08.399 "zerocopy_threshold": 0 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "sock_impl_set_options", 00:18:08.399 "params": { 00:18:08.399 "enable_ktls": false, 00:18:08.399 "enable_placement_id": 0, 00:18:08.399 "enable_quickack": false, 00:18:08.399 "enable_recv_pipe": true, 00:18:08.399 "enable_zerocopy_send_client": false, 00:18:08.399 "enable_zerocopy_send_server": true, 00:18:08.399 "impl_name": "ssl", 00:18:08.399 "recv_buf_size": 4096, 00:18:08.399 "send_buf_size": 4096, 00:18:08.399 "tls_version": 0, 00:18:08.399 "zerocopy_threshold": 0 00:18:08.399 } 00:18:08.399 } 00:18:08.399 ] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "vmd", 00:18:08.399 "config": [] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "accel", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "accel_set_options", 00:18:08.399 "params": { 00:18:08.399 "buf_count": 2048, 00:18:08.399 "large_cache_size": 16, 00:18:08.399 "sequence_count": 2048, 00:18:08.399 "small_cache_size": 128, 00:18:08.399 "task_count": 2048 00:18:08.399 } 00:18:08.399 } 00:18:08.399 ] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "bdev", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "bdev_set_options", 00:18:08.399 "params": { 00:18:08.399 
"bdev_auto_examine": true, 00:18:08.399 "bdev_io_cache_size": 256, 00:18:08.399 "bdev_io_pool_size": 65535, 00:18:08.399 "iobuf_large_cache_size": 16, 00:18:08.399 "iobuf_small_cache_size": 128 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_raid_set_options", 00:18:08.399 "params": { 00:18:08.399 "process_window_size_kb": 1024 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_iscsi_set_options", 00:18:08.399 "params": { 00:18:08.399 "timeout_sec": 30 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_nvme_set_options", 00:18:08.399 "params": { 00:18:08.399 "action_on_timeout": "none", 00:18:08.399 "allow_accel_sequence": false, 00:18:08.399 "arbitration_burst": 0, 00:18:08.399 "bdev_retry_count": 3, 00:18:08.399 "ctrlr_loss_timeout_sec": 0, 00:18:08.399 "delay_cmd_submit": true, 00:18:08.399 "fast_io_fail_timeout_sec": 0, 00:18:08.399 "generate_uuids": false, 00:18:08.399 "high_priority_weight": 0, 00:18:08.399 "io_path_stat": false, 00:18:08.399 "io_queue_requests": 0, 00:18:08.399 "keep_alive_timeout_ms": 10000, 00:18:08.399 "low_priority_weight": 0, 00:18:08.399 "medium_priority_weight": 0, 00:18:08.399 "nvme_adminq_poll_period_us": 10000, 00:18:08.399 "nvme_ioq_poll_period_us": 0, 00:18:08.399 "reconnect_delay_sec": 0, 00:18:08.399 "timeout_admin_us": 0, 00:18:08.399 "timeout_us": 0, 00:18:08.399 "transport_ack_timeout": 0, 00:18:08.399 "transport_retry_count": 4, 00:18:08.399 "transport_tos": 0 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_nvme_set_hotplug", 00:18:08.399 "params": { 00:18:08.399 "enable": false, 00:18:08.399 "period_us": 100000 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_malloc_create", 00:18:08.399 "params": { 00:18:08.399 "block_size": 4096, 00:18:08.399 "name": "malloc0", 00:18:08.399 "num_blocks": 8192, 00:18:08.399 "optimal_io_boundary": 0, 00:18:08.399 "physical_block_size": 4096, 00:18:08.399 "uuid": "8ba29ed3-0cce-4360-b28c-d8525ecf4a05" 00:18:08.399 } 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "method": "bdev_wait_for_examine" 00:18:08.399 } 00:18:08.399 ] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "nbd", 00:18:08.399 "config": [] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "scheduler", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "framework_set_scheduler", 00:18:08.399 "params": { 00:18:08.399 "name": "static" 00:18:08.399 } 00:18:08.399 } 00:18:08.399 ] 00:18:08.399 }, 00:18:08.399 { 00:18:08.399 "subsystem": "nvmf", 00:18:08.399 "config": [ 00:18:08.399 { 00:18:08.399 "method": "nvmf_set_config", 00:18:08.399 "params": { 00:18:08.399 "admin_cmd_passthru": { 00:18:08.400 "identify_ctrlr": false 00:18:08.400 }, 00:18:08.400 "discovery_filter": "match_any" 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_set_max_subsystems", 00:18:08.400 "params": { 00:18:08.400 "max_subsystems": 1024 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_set_crdt", 00:18:08.400 "params": { 00:18:08.400 "crdt1": 0, 00:18:08.400 "crdt2": 0, 00:18:08.400 "crdt3": 0 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_create_transport", 00:18:08.400 "params": { 00:18:08.400 "abort_timeout_sec": 1, 00:18:08.400 "buf_cache_size": 4294967295, 00:18:08.400 "c2h_success": false, 00:18:08.400 "dif_insert_or_strip": false, 00:18:08.400 "in_capsule_data_size": 4096, 00:18:08.400 "io_unit_size": 131072, 00:18:08.400 "max_aq_depth": 128, 
00:18:08.400 "max_io_qpairs_per_ctrlr": 127, 00:18:08.400 "max_io_size": 131072, 00:18:08.400 "max_queue_depth": 128, 00:18:08.400 "num_shared_buffers": 511, 00:18:08.400 "sock_priority": 0, 00:18:08.400 "trtype": "TCP", 00:18:08.400 "zcopy": false 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_create_subsystem", 00:18:08.400 "params": { 00:18:08.400 "allow_any_host": false, 00:18:08.400 "ana_reporting": false, 00:18:08.400 "max_cntlid": 65519, 00:18:08.400 "max_namespaces": 10, 00:18:08.400 "min_cntlid": 1, 00:18:08.400 "model_number": "SPDK bdev Controller", 00:18:08.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.400 "serial_number": "SPDK00000000000001" 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_subsystem_add_host", 00:18:08.400 "params": { 00:18:08.400 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.400 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_subsystem_add_ns", 00:18:08.400 "params": { 00:18:08.400 "namespace": { 00:18:08.400 "bdev_name": "malloc0", 00:18:08.400 "nguid": "8BA29ED30CCE4360B28CD8525ECF4A05", 00:18:08.400 "nsid": 1, 00:18:08.400 "uuid": "8ba29ed3-0cce-4360-b28c-d8525ecf4a05" 00:18:08.400 }, 00:18:08.400 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:08.400 } 00:18:08.400 }, 00:18:08.400 { 00:18:08.400 "method": "nvmf_subsystem_add_listener", 00:18:08.400 "params": { 00:18:08.400 "listen_address": { 00:18:08.400 "adrfam": "IPv4", 00:18:08.400 "traddr": "10.0.0.2", 00:18:08.400 "trsvcid": "4420", 00:18:08.400 "trtype": "TCP" 00:18:08.400 }, 00:18:08.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.400 "secure_channel": true 00:18:08.400 } 00:18:08.400 } 00:18:08.400 ] 00:18:08.400 } 00:18:08.400 ] 00:18:08.400 }' 00:18:08.400 04:01:43 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:08.659 04:01:43 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:08.659 "subsystems": [ 00:18:08.659 { 00:18:08.660 "subsystem": "iobuf", 00:18:08.660 "config": [ 00:18:08.660 { 00:18:08.660 "method": "iobuf_set_options", 00:18:08.660 "params": { 00:18:08.660 "large_bufsize": 135168, 00:18:08.660 "large_pool_count": 1024, 00:18:08.660 "small_bufsize": 8192, 00:18:08.660 "small_pool_count": 8192 00:18:08.660 } 00:18:08.660 } 00:18:08.660 ] 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "subsystem": "sock", 00:18:08.660 "config": [ 00:18:08.660 { 00:18:08.660 "method": "sock_impl_set_options", 00:18:08.660 "params": { 00:18:08.660 "enable_ktls": false, 00:18:08.660 "enable_placement_id": 0, 00:18:08.660 "enable_quickack": false, 00:18:08.660 "enable_recv_pipe": true, 00:18:08.660 "enable_zerocopy_send_client": false, 00:18:08.660 "enable_zerocopy_send_server": true, 00:18:08.660 "impl_name": "posix", 00:18:08.660 "recv_buf_size": 2097152, 00:18:08.660 "send_buf_size": 2097152, 00:18:08.660 "tls_version": 0, 00:18:08.660 "zerocopy_threshold": 0 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "sock_impl_set_options", 00:18:08.660 "params": { 00:18:08.660 "enable_ktls": false, 00:18:08.660 "enable_placement_id": 0, 00:18:08.660 "enable_quickack": false, 00:18:08.660 "enable_recv_pipe": true, 00:18:08.660 "enable_zerocopy_send_client": false, 00:18:08.660 "enable_zerocopy_send_server": true, 00:18:08.660 "impl_name": "ssl", 00:18:08.660 "recv_buf_size": 4096, 00:18:08.660 "send_buf_size": 4096, 00:18:08.660 
"tls_version": 0, 00:18:08.660 "zerocopy_threshold": 0 00:18:08.660 } 00:18:08.660 } 00:18:08.660 ] 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "subsystem": "vmd", 00:18:08.660 "config": [] 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "subsystem": "accel", 00:18:08.660 "config": [ 00:18:08.660 { 00:18:08.660 "method": "accel_set_options", 00:18:08.660 "params": { 00:18:08.660 "buf_count": 2048, 00:18:08.660 "large_cache_size": 16, 00:18:08.660 "sequence_count": 2048, 00:18:08.660 "small_cache_size": 128, 00:18:08.660 "task_count": 2048 00:18:08.660 } 00:18:08.660 } 00:18:08.660 ] 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "subsystem": "bdev", 00:18:08.660 "config": [ 00:18:08.660 { 00:18:08.660 "method": "bdev_set_options", 00:18:08.660 "params": { 00:18:08.660 "bdev_auto_examine": true, 00:18:08.660 "bdev_io_cache_size": 256, 00:18:08.660 "bdev_io_pool_size": 65535, 00:18:08.660 "iobuf_large_cache_size": 16, 00:18:08.660 "iobuf_small_cache_size": 128 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_raid_set_options", 00:18:08.660 "params": { 00:18:08.660 "process_window_size_kb": 1024 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_iscsi_set_options", 00:18:08.660 "params": { 00:18:08.660 "timeout_sec": 30 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_nvme_set_options", 00:18:08.660 "params": { 00:18:08.660 "action_on_timeout": "none", 00:18:08.660 "allow_accel_sequence": false, 00:18:08.660 "arbitration_burst": 0, 00:18:08.660 "bdev_retry_count": 3, 00:18:08.660 "ctrlr_loss_timeout_sec": 0, 00:18:08.660 "delay_cmd_submit": true, 00:18:08.660 "fast_io_fail_timeout_sec": 0, 00:18:08.660 "generate_uuids": false, 00:18:08.660 "high_priority_weight": 0, 00:18:08.660 "io_path_stat": false, 00:18:08.660 "io_queue_requests": 512, 00:18:08.660 "keep_alive_timeout_ms": 10000, 00:18:08.660 "low_priority_weight": 0, 00:18:08.660 "medium_priority_weight": 0, 00:18:08.660 "nvme_adminq_poll_period_us": 10000, 00:18:08.660 "nvme_ioq_poll_period_us": 0, 00:18:08.660 "reconnect_delay_sec": 0, 00:18:08.660 "timeout_admin_us": 0, 00:18:08.660 "timeout_us": 0, 00:18:08.660 "transport_ack_timeout": 0, 00:18:08.660 "transport_retry_count": 4, 00:18:08.660 "transport_tos": 0 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_nvme_attach_controller", 00:18:08.660 "params": { 00:18:08.660 "adrfam": "IPv4", 00:18:08.660 "ctrlr_loss_timeout_sec": 0, 00:18:08.660 "ddgst": false, 00:18:08.660 "fast_io_fail_timeout_sec": 0, 00:18:08.660 "hdgst": false, 00:18:08.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.660 "name": "TLSTEST", 00:18:08.660 "prchk_guard": false, 00:18:08.660 "prchk_reftag": false, 00:18:08.660 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:08.660 "reconnect_delay_sec": 0, 00:18:08.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.660 "traddr": "10.0.0.2", 00:18:08.660 "trsvcid": "4420", 00:18:08.660 "trtype": "TCP" 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_nvme_set_hotplug", 00:18:08.660 "params": { 00:18:08.660 "enable": false, 00:18:08.660 "period_us": 100000 00:18:08.660 } 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "method": "bdev_wait_for_examine" 00:18:08.660 } 00:18:08.660 ] 00:18:08.660 }, 00:18:08.660 { 00:18:08.660 "subsystem": "nbd", 00:18:08.660 "config": [] 00:18:08.660 } 00:18:08.660 ] 00:18:08.660 }' 00:18:08.660 04:01:43 -- target/tls.sh@208 -- # killprocess 78943 00:18:08.660 04:01:43 -- 
common/autotest_common.sh@936 -- # '[' -z 78943 ']' 00:18:08.660 04:01:43 -- common/autotest_common.sh@940 -- # kill -0 78943 00:18:08.660 04:01:43 -- common/autotest_common.sh@941 -- # uname 00:18:08.660 04:01:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.660 04:01:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78943 00:18:08.660 04:01:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.660 04:01:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.660 04:01:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78943' 00:18:08.660 killing process with pid 78943 00:18:08.660 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.660 00:18:08.660 Latency(us) 00:18:08.660 [2024-11-08T04:01:43.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.660 [2024-11-08T04:01:43.771Z] =================================================================================================================== 00:18:08.660 [2024-11-08T04:01:43.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.660 04:01:43 -- common/autotest_common.sh@955 -- # kill 78943 00:18:08.660 04:01:43 -- common/autotest_common.sh@960 -- # wait 78943 00:18:08.920 04:01:43 -- target/tls.sh@209 -- # killprocess 78845 00:18:08.920 04:01:43 -- common/autotest_common.sh@936 -- # '[' -z 78845 ']' 00:18:08.920 04:01:43 -- common/autotest_common.sh@940 -- # kill -0 78845 00:18:08.920 04:01:43 -- common/autotest_common.sh@941 -- # uname 00:18:08.920 04:01:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.920 04:01:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78845 00:18:08.920 04:01:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:08.920 04:01:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:08.920 killing process with pid 78845 00:18:08.920 04:01:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78845' 00:18:08.920 04:01:43 -- common/autotest_common.sh@955 -- # kill 78845 00:18:08.920 04:01:43 -- common/autotest_common.sh@960 -- # wait 78845 00:18:09.487 04:01:44 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:09.487 04:01:44 -- target/tls.sh@212 -- # echo '{ 00:18:09.487 "subsystems": [ 00:18:09.487 { 00:18:09.487 "subsystem": "iobuf", 00:18:09.487 "config": [ 00:18:09.487 { 00:18:09.487 "method": "iobuf_set_options", 00:18:09.487 "params": { 00:18:09.487 "large_bufsize": 135168, 00:18:09.487 "large_pool_count": 1024, 00:18:09.487 "small_bufsize": 8192, 00:18:09.487 "small_pool_count": 8192 00:18:09.487 } 00:18:09.487 } 00:18:09.487 ] 00:18:09.487 }, 00:18:09.487 { 00:18:09.487 "subsystem": "sock", 00:18:09.487 "config": [ 00:18:09.487 { 00:18:09.487 "method": "sock_impl_set_options", 00:18:09.487 "params": { 00:18:09.487 "enable_ktls": false, 00:18:09.487 "enable_placement_id": 0, 00:18:09.487 "enable_quickack": false, 00:18:09.487 "enable_recv_pipe": true, 00:18:09.487 "enable_zerocopy_send_client": false, 00:18:09.487 "enable_zerocopy_send_server": true, 00:18:09.487 "impl_name": "posix", 00:18:09.487 "recv_buf_size": 2097152, 00:18:09.488 "send_buf_size": 2097152, 00:18:09.488 "tls_version": 0, 00:18:09.488 "zerocopy_threshold": 0 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "sock_impl_set_options", 00:18:09.488 "params": { 00:18:09.488 "enable_ktls": false, 00:18:09.488 "enable_placement_id": 0, 00:18:09.488 
"enable_quickack": false, 00:18:09.488 "enable_recv_pipe": true, 00:18:09.488 "enable_zerocopy_send_client": false, 00:18:09.488 "enable_zerocopy_send_server": true, 00:18:09.488 "impl_name": "ssl", 00:18:09.488 "recv_buf_size": 4096, 00:18:09.488 "send_buf_size": 4096, 00:18:09.488 "tls_version": 0, 00:18:09.488 "zerocopy_threshold": 0 00:18:09.488 } 00:18:09.488 } 00:18:09.488 ] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "vmd", 00:18:09.488 "config": [] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "accel", 00:18:09.488 "config": [ 00:18:09.488 { 00:18:09.488 "method": "accel_set_options", 00:18:09.488 "params": { 00:18:09.488 "buf_count": 2048, 00:18:09.488 "large_cache_size": 16, 00:18:09.488 "sequence_count": 2048, 00:18:09.488 "small_cache_size": 128, 00:18:09.488 "task_count": 2048 00:18:09.488 } 00:18:09.488 } 00:18:09.488 ] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "bdev", 00:18:09.488 "config": [ 00:18:09.488 { 00:18:09.488 "method": "bdev_set_options", 00:18:09.488 "params": { 00:18:09.488 "bdev_auto_examine": true, 00:18:09.488 "bdev_io_cache_size": 256, 00:18:09.488 "bdev_io_pool_size": 65535, 00:18:09.488 "iobuf_large_cache_size": 16, 00:18:09.488 "iobuf_small_cache_size": 128 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_raid_set_options", 00:18:09.488 "params": { 00:18:09.488 "process_window_size_kb": 1024 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_iscsi_set_options", 00:18:09.488 "params": { 00:18:09.488 "timeout_sec": 30 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_nvme_set_options", 00:18:09.488 "params": { 00:18:09.488 "action_on_timeout": "none", 00:18:09.488 "allow_accel_sequence": false, 00:18:09.488 "arbitration_burst": 0, 00:18:09.488 "bdev_retry_count": 3, 00:18:09.488 "ctrlr_loss_timeout_sec": 0, 00:18:09.488 "delay_cmd_submit": true, 00:18:09.488 "fast_io_fail_timeout_sec": 0, 00:18:09.488 "generate_uuids": false, 00:18:09.488 "high_priority_weight": 0, 00:18:09.488 "io_path_stat": false, 00:18:09.488 "io_queue_requests": 0, 00:18:09.488 "keep_alive_timeout_ms": 10000, 00:18:09.488 "low_priority_weight": 0, 00:18:09.488 "medium_priority_weight": 0, 00:18:09.488 "nvme_adminq_poll_period_us": 10000, 00:18:09.488 "nvme_ioq_poll_period_us": 0, 00:18:09.488 "reconnect_delay_sec": 0, 00:18:09.488 "timeout_admin_us": 0, 00:18:09.488 "timeout_us": 0, 00:18:09.488 "transport_ack_timeout": 0, 00:18:09.488 "transport_retry_count": 4, 00:18:09.488 "transport_tos": 0 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_nvme_set_hotplug", 00:18:09.488 "params": { 00:18:09.488 "enable": false, 00:18:09.488 "period_us": 100000 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_malloc_create", 00:18:09.488 "params": { 00:18:09.488 "block_size": 4096, 00:18:09.488 "name": "malloc0", 00:18:09.488 "num_blocks": 8192, 00:18:09.488 "optimal_io_boundary": 0, 00:18:09.488 "physical_block_size": 4096, 00:18:09.488 "uuid": "8ba29ed3-0cce-4360-b28c-d8525ecf4a05" 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "bdev_wait_for_examine" 00:18:09.488 } 00:18:09.488 ] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "nbd", 00:18:09.488 "config": [] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "scheduler", 00:18:09.488 "config": [ 00:18:09.488 { 00:18:09.488 "method": "framework_set_scheduler", 00:18:09.488 "params": { 00:18:09.488 "name": "static" 00:18:09.488 } 00:18:09.488 } 
00:18:09.488 ] 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "subsystem": "nvmf", 00:18:09.488 "config": [ 00:18:09.488 { 00:18:09.488 "method": "nvmf_set_config", 00:18:09.488 "params": { 00:18:09.488 "admin_cmd_passthru": { 00:18:09.488 "identify_ctrlr": false 00:18:09.488 }, 00:18:09.488 "discovery_filter": "match_any" 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "nvmf_set_max_subsystems", 00:18:09.488 "params": { 00:18:09.488 "max_subsystems": 1024 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "nvmf_set_crdt", 00:18:09.488 "params": { 00:18:09.488 "crdt1": 0, 00:18:09.488 "crdt2": 0, 00:18:09.488 "crdt3": 0 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "nvmf_create_transport", 00:18:09.488 "params": { 00:18:09.488 "abort_timeout_sec": 1, 00:18:09.488 "buf_cache_size": 4294967295, 00:18:09.488 "c2h_success": false, 00:18:09.488 "dif_insert_or_strip": false, 00:18:09.488 "in_capsule_data_size": 4096, 00:18:09.488 "io_unit_size": 131072, 00:18:09.488 "max_aq_depth": 128, 00:18:09.488 "max_io_qpairs_per_ctrlr": 127, 00:18:09.488 "max_io_size": 131072, 00:18:09.488 "max_queue_depth": 128, 00:18:09.488 "num_shared_buffers": 511, 00:18:09.488 "sock_priority": 0, 00:18:09.488 "trtype": "TCP", 00:18:09.488 "zcopy": false 00:18:09.488 } 00:18:09.488 }, 00:18:09.488 { 00:18:09.488 "method": "nvmf_create_subsystem", 00:18:09.489 "params": { 00:18:09.489 "allow_any_host": false, 00:18:09.489 "ana_reporting": false, 00:18:09.489 "max_cntlid": 65519, 00:18:09.489 "max_namespaces": 10, 00:18:09.489 "min_cntlid": 1, 00:18:09.489 "model_number": "SPDK bdev Controller", 00:18:09.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.489 "serial_number": "SPDK00000000000001" 00:18:09.489 } 00:18:09.489 }, 00:18:09.489 { 00:18:09.489 "method": "nvmf_subsystem_add_host", 00:18:09.489 "params": { 00:18:09.489 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.489 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:09.489 } 00:18:09.489 }, 00:18:09.489 { 00:18:09.489 "method": "nvmf_subsystem_add_ns", 00:18:09.489 "params": { 00:18:09.489 "namespace": { 00:18:09.489 "bdev_name": "malloc0", 00:18:09.489 "nguid": "8BA29ED30CCE4360B28CD8525ECF4A05", 00:18:09.489 "nsid": 1, 00:18:09.489 "uuid": "8ba29ed3-0cce-4360-b28c-d8525ecf4a05" 00:18:09.489 }, 00:18:09.489 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:09.489 } 00:18:09.489 }, 00:18:09.489 { 00:18:09.489 "method": "nvmf_subsystem_add_listener", 00:18:09.489 "params": { 00:18:09.489 "listen_address": { 00:18:09.489 "adrfam": "IPv4", 00:18:09.489 "traddr": "10.0.0.2", 00:18:09.489 "trsvcid": "4420", 00:18:09.489 "trtype": "TCP" 00:18:09.489 }, 00:18:09.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.489 "secure_channel": true 00:18:09.489 } 00:18:09.489 } 00:18:09.489 ] 00:18:09.489 } 00:18:09.489 ] 00:18:09.489 }' 00:18:09.489 04:01:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.489 04:01:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.489 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:18:09.489 04:01:44 -- nvmf/common.sh@469 -- # nvmfpid=79022 00:18:09.489 04:01:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:09.489 04:01:44 -- nvmf/common.sh@470 -- # waitforlisten 79022 00:18:09.489 04:01:44 -- common/autotest_common.sh@829 -- # '[' -z 79022 ']' 00:18:09.489 04:01:44 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.489 04:01:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.489 04:01:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.489 04:01:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.489 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:18:09.489 [2024-11-08 04:01:44.342574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:09.489 [2024-11-08 04:01:44.342665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.489 [2024-11-08 04:01:44.476842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.489 [2024-11-08 04:01:44.576796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.489 [2024-11-08 04:01:44.576931] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.489 [2024-11-08 04:01:44.576945] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.489 [2024-11-08 04:01:44.576954] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.489 [2024-11-08 04:01:44.576983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.748 [2024-11-08 04:01:44.824623] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.748 [2024-11-08 04:01:44.856580] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.748 [2024-11-08 04:01:44.856797] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.316 04:01:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.316 04:01:45 -- common/autotest_common.sh@862 -- # return 0 00:18:10.316 04:01:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.316 04:01:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.316 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.316 04:01:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.316 04:01:45 -- target/tls.sh@216 -- # bdevperf_pid=79066 00:18:10.316 04:01:45 -- target/tls.sh@217 -- # waitforlisten 79066 /var/tmp/bdevperf.sock 00:18:10.316 04:01:45 -- common/autotest_common.sh@829 -- # '[' -z 79066 ']' 00:18:10.316 04:01:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.316 04:01:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.316 04:01:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
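Rather than re-issuing RPCs a third time, the harness replays the JSON captured by save_config: the tgtconf dump is echoed into the app through what is evidently bash process substitution, which is why nvmf_tgt sees -c /dev/fd/62 (and why bdevperf, just below, receives its own dump as -c /dev/fd/63). A sketch of the pattern, assuming bash:
$ tgtconf=$(scripts/rpc.py save_config)
$ ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
This is also why the full configuration appears verbatim in the log: xtrace prints the entire echo argument before the file descriptor is handed to the target.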
00:18:10.316 04:01:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.316 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.316 04:01:45 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:10.316 04:01:45 -- target/tls.sh@213 -- # echo '{ 00:18:10.316 "subsystems": [ 00:18:10.316 { 00:18:10.316 "subsystem": "iobuf", 00:18:10.316 "config": [ 00:18:10.316 { 00:18:10.316 "method": "iobuf_set_options", 00:18:10.316 "params": { 00:18:10.316 "large_bufsize": 135168, 00:18:10.316 "large_pool_count": 1024, 00:18:10.316 "small_bufsize": 8192, 00:18:10.317 "small_pool_count": 8192 00:18:10.317 } 00:18:10.317 } 00:18:10.317 ] 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "subsystem": "sock", 00:18:10.317 "config": [ 00:18:10.317 { 00:18:10.317 "method": "sock_impl_set_options", 00:18:10.317 "params": { 00:18:10.317 "enable_ktls": false, 00:18:10.317 "enable_placement_id": 0, 00:18:10.317 "enable_quickack": false, 00:18:10.317 "enable_recv_pipe": true, 00:18:10.317 "enable_zerocopy_send_client": false, 00:18:10.317 "enable_zerocopy_send_server": true, 00:18:10.317 "impl_name": "posix", 00:18:10.317 "recv_buf_size": 2097152, 00:18:10.317 "send_buf_size": 2097152, 00:18:10.317 "tls_version": 0, 00:18:10.317 "zerocopy_threshold": 0 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "sock_impl_set_options", 00:18:10.317 "params": { 00:18:10.317 "enable_ktls": false, 00:18:10.317 "enable_placement_id": 0, 00:18:10.317 "enable_quickack": false, 00:18:10.317 "enable_recv_pipe": true, 00:18:10.317 "enable_zerocopy_send_client": false, 00:18:10.317 "enable_zerocopy_send_server": true, 00:18:10.317 "impl_name": "ssl", 00:18:10.317 "recv_buf_size": 4096, 00:18:10.317 "send_buf_size": 4096, 00:18:10.317 "tls_version": 0, 00:18:10.317 "zerocopy_threshold": 0 00:18:10.317 } 00:18:10.317 } 00:18:10.317 ] 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "subsystem": "vmd", 00:18:10.317 "config": [] 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "subsystem": "accel", 00:18:10.317 "config": [ 00:18:10.317 { 00:18:10.317 "method": "accel_set_options", 00:18:10.317 "params": { 00:18:10.317 "buf_count": 2048, 00:18:10.317 "large_cache_size": 16, 00:18:10.317 "sequence_count": 2048, 00:18:10.317 "small_cache_size": 128, 00:18:10.317 "task_count": 2048 00:18:10.317 } 00:18:10.317 } 00:18:10.317 ] 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "subsystem": "bdev", 00:18:10.317 "config": [ 00:18:10.317 { 00:18:10.317 "method": "bdev_set_options", 00:18:10.317 "params": { 00:18:10.317 "bdev_auto_examine": true, 00:18:10.317 "bdev_io_cache_size": 256, 00:18:10.317 "bdev_io_pool_size": 65535, 00:18:10.317 "iobuf_large_cache_size": 16, 00:18:10.317 "iobuf_small_cache_size": 128 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_raid_set_options", 00:18:10.317 "params": { 00:18:10.317 "process_window_size_kb": 1024 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_iscsi_set_options", 00:18:10.317 "params": { 00:18:10.317 "timeout_sec": 30 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_nvme_set_options", 00:18:10.317 "params": { 00:18:10.317 "action_on_timeout": "none", 00:18:10.317 "allow_accel_sequence": false, 00:18:10.317 "arbitration_burst": 0, 00:18:10.317 "bdev_retry_count": 3, 00:18:10.317 "ctrlr_loss_timeout_sec": 0, 00:18:10.317 "delay_cmd_submit": true, 00:18:10.317 "fast_io_fail_timeout_sec": 0, 
00:18:10.317 "generate_uuids": false, 00:18:10.317 "high_priority_weight": 0, 00:18:10.317 "io_path_stat": false, 00:18:10.317 "io_queue_requests": 512, 00:18:10.317 "keep_alive_timeout_ms": 10000, 00:18:10.317 "low_priority_weight": 0, 00:18:10.317 "medium_priority_weight": 0, 00:18:10.317 "nvme_adminq_poll_period_us": 10000, 00:18:10.317 "nvme_ioq_poll_period_us": 0, 00:18:10.317 "reconnect_delay_sec": 0, 00:18:10.317 "timeout_admin_us": 0, 00:18:10.317 "timeout_us": 0, 00:18:10.317 "transport_ack_timeout": 0, 00:18:10.317 "transport_retry_count": 4, 00:18:10.317 "transport_tos": 0 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_nvme_attach_controller", 00:18:10.317 "params": { 00:18:10.317 "adrfam": "IPv4", 00:18:10.317 "ctrlr_loss_timeout_sec": 0, 00:18:10.317 "ddgst": false, 00:18:10.317 "fast_io_fail_timeout_sec": 0, 00:18:10.317 "hdgst": false, 00:18:10.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.317 "name": "TLSTEST", 00:18:10.317 "prchk_guard": false, 00:18:10.317 "prchk_reftag": false, 00:18:10.317 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:10.317 "reconnect_delay_sec": 0, 00:18:10.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.317 "traddr": "10.0.0.2", 00:18:10.317 "trsvcid": "4420", 00:18:10.317 "trtype": "TCP" 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_nvme_set_hotplug", 00:18:10.317 "params": { 00:18:10.317 "enable": false, 00:18:10.317 "period_us": 100000 00:18:10.317 } 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "method": "bdev_wait_for_examine" 00:18:10.317 } 00:18:10.317 ] 00:18:10.317 }, 00:18:10.317 { 00:18:10.317 "subsystem": "nbd", 00:18:10.317 "config": [] 00:18:10.317 } 00:18:10.317 ] 00:18:10.317 }' 00:18:10.317 [2024-11-08 04:01:45.412542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:10.317 [2024-11-08 04:01:45.412627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79066 ] 00:18:10.577 [2024-11-08 04:01:45.551568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.577 [2024-11-08 04:01:45.656495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.836 [2024-11-08 04:01:45.815186] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.404 04:01:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.404 04:01:46 -- common/autotest_common.sh@862 -- # return 0 00:18:11.404 04:01:46 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:11.404 Running I/O for 10 seconds... 
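bdevperf was started with -z, so it idles until told to run: bdevperf.py connects to the same /var/tmp/bdevperf.sock, issues perform_tests, and allows up to 20 s for the 10 s verify workload against TLSTESTn1 to complete. The results table that follows works out to roughly:
  5134.87 IOPS x 4096 B/IO ≈ 21.0 MB/s ≈ 20.06 MiB/s through the TLS channel
which matches the reported MiB/s column, with zero failures or timeouts over the run.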
00:18:21.381 00:18:21.381 Latency(us) 00:18:21.381 [2024-11-08T04:01:56.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.381 [2024-11-08T04:01:56.492Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:21.381 Verification LBA range: start 0x0 length 0x2000 00:18:21.381 TLSTESTn1 : 10.02 5134.87 20.06 0.00 0.00 24883.99 6881.28 24784.52 00:18:21.381 [2024-11-08T04:01:56.492Z] =================================================================================================================== 00:18:21.381 [2024-11-08T04:01:56.492Z] Total : 5134.87 20.06 0.00 0.00 24883.99 6881.28 24784.52 00:18:21.381 0 00:18:21.640 04:01:56 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.640 04:01:56 -- target/tls.sh@223 -- # killprocess 79066 00:18:21.640 04:01:56 -- common/autotest_common.sh@936 -- # '[' -z 79066 ']' 00:18:21.640 04:01:56 -- common/autotest_common.sh@940 -- # kill -0 79066 00:18:21.640 04:01:56 -- common/autotest_common.sh@941 -- # uname 00:18:21.640 04:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.640 04:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79066 00:18:21.640 04:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:21.640 04:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:21.640 killing process with pid 79066 00:18:21.640 04:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79066' 00:18:21.640 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.640 00:18:21.640 Latency(us) 00:18:21.640 [2024-11-08T04:01:56.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.640 [2024-11-08T04:01:56.751Z] =================================================================================================================== 00:18:21.640 [2024-11-08T04:01:56.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.640 04:01:56 -- common/autotest_common.sh@955 -- # kill 79066 00:18:21.640 04:01:56 -- common/autotest_common.sh@960 -- # wait 79066 00:18:21.899 04:01:56 -- target/tls.sh@224 -- # killprocess 79022 00:18:21.899 04:01:56 -- common/autotest_common.sh@936 -- # '[' -z 79022 ']' 00:18:21.899 04:01:56 -- common/autotest_common.sh@940 -- # kill -0 79022 00:18:21.899 04:01:56 -- common/autotest_common.sh@941 -- # uname 00:18:21.899 04:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.899 04:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79022 00:18:21.899 04:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.899 04:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.899 killing process with pid 79022 00:18:21.899 04:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79022' 00:18:21.899 04:01:56 -- common/autotest_common.sh@955 -- # kill 79022 00:18:21.899 04:01:56 -- common/autotest_common.sh@960 -- # wait 79022 00:18:22.158 04:01:57 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:22.158 04:01:57 -- target/tls.sh@227 -- # cleanup 00:18:22.158 04:01:57 -- target/tls.sh@15 -- # process_shm --id 0 00:18:22.158 04:01:57 -- common/autotest_common.sh@806 -- # type=--id 00:18:22.158 04:01:57 -- common/autotest_common.sh@807 -- # id=0 00:18:22.158 04:01:57 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:22.158 04:01:57 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:22.158 04:01:57 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:22.158 04:01:57 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:22.158 04:01:57 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:22.158 04:01:57 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:22.158 nvmf_trace.0 00:18:22.158 04:01:57 -- common/autotest_common.sh@821 -- # return 0 00:18:22.158 04:01:57 -- target/tls.sh@16 -- # killprocess 79066 00:18:22.158 04:01:57 -- common/autotest_common.sh@936 -- # '[' -z 79066 ']' 00:18:22.158 04:01:57 -- common/autotest_common.sh@940 -- # kill -0 79066 00:18:22.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79066) - No such process 00:18:22.158 Process with pid 79066 is not found 00:18:22.158 04:01:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79066 is not found' 00:18:22.158 04:01:57 -- target/tls.sh@17 -- # nvmftestfini 00:18:22.158 04:01:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.158 04:01:57 -- nvmf/common.sh@116 -- # sync 00:18:22.158 04:01:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.158 04:01:57 -- nvmf/common.sh@119 -- # set +e 00:18:22.158 04:01:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.158 04:01:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.158 rmmod nvme_tcp 00:18:22.158 rmmod nvme_fabrics 00:18:22.158 rmmod nvme_keyring 00:18:22.416 04:01:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.416 04:01:57 -- nvmf/common.sh@123 -- # set -e 00:18:22.416 04:01:57 -- nvmf/common.sh@124 -- # return 0 00:18:22.416 04:01:57 -- nvmf/common.sh@477 -- # '[' -n 79022 ']' 00:18:22.416 04:01:57 -- nvmf/common.sh@478 -- # killprocess 79022 00:18:22.416 04:01:57 -- common/autotest_common.sh@936 -- # '[' -z 79022 ']' 00:18:22.416 04:01:57 -- common/autotest_common.sh@940 -- # kill -0 79022 00:18:22.416 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79022) - No such process 00:18:22.416 Process with pid 79022 is not found 00:18:22.416 04:01:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79022 is not found' 00:18:22.416 04:01:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.416 04:01:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.416 04:01:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.416 04:01:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.416 04:01:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.416 04:01:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.416 04:01:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.416 04:01:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.417 04:01:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:22.417 04:01:57 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:22.417 00:18:22.417 real 1m11.632s 00:18:22.417 user 1m46.512s 00:18:22.417 sys 0m27.291s 00:18:22.417 04:01:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.417 04:01:57 -- common/autotest_common.sh@10 -- # set +x 00:18:22.417 ************************************ 00:18:22.417 END TEST nvmf_tls 00:18:22.417 
************************************ 00:18:22.417 04:01:57 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:22.417 04:01:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.417 04:01:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.417 04:01:57 -- common/autotest_common.sh@10 -- # set +x 00:18:22.417 ************************************ 00:18:22.417 START TEST nvmf_fips 00:18:22.417 ************************************ 00:18:22.417 04:01:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:22.417 * Looking for test storage... 00:18:22.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:22.417 04:01:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:22.417 04:01:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:22.417 04:01:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:22.676 04:01:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:22.676 04:01:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:22.676 04:01:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.676 04:01:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.676 04:01:57 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.676 04:01:57 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.676 04:01:57 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.676 04:01:57 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.676 04:01:57 -- scripts/common.sh@337 -- # local 'op=<' 00:18:22.676 04:01:57 -- scripts/common.sh@339 -- # ver1_l=2 00:18:22.676 04:01:57 -- scripts/common.sh@340 -- # ver2_l=1 00:18:22.676 04:01:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.676 04:01:57 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.676 04:01:57 -- scripts/common.sh@344 -- # : 1 00:18:22.676 04:01:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.676 04:01:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.676 04:01:57 -- scripts/common.sh@364 -- # decimal 1 00:18:22.676 04:01:57 -- scripts/common.sh@352 -- # local d=1 00:18:22.676 04:01:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.676 04:01:57 -- scripts/common.sh@354 -- # echo 1 00:18:22.676 04:01:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.676 04:01:57 -- scripts/common.sh@365 -- # decimal 2 00:18:22.676 04:01:57 -- scripts/common.sh@352 -- # local d=2 00:18:22.676 04:01:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.676 04:01:57 -- scripts/common.sh@354 -- # echo 2 00:18:22.676 04:01:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:22.676 04:01:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.676 04:01:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.676 04:01:57 -- scripts/common.sh@367 -- # return 0 00:18:22.676 04:01:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.676 04:01:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:22.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.676 --rc genhtml_branch_coverage=1 00:18:22.676 --rc genhtml_function_coverage=1 00:18:22.676 --rc genhtml_legend=1 00:18:22.676 --rc geninfo_all_blocks=1 00:18:22.676 --rc geninfo_unexecuted_blocks=1 00:18:22.676 00:18:22.676 ' 00:18:22.676 04:01:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:22.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.676 --rc genhtml_branch_coverage=1 00:18:22.676 --rc genhtml_function_coverage=1 00:18:22.676 --rc genhtml_legend=1 00:18:22.676 --rc geninfo_all_blocks=1 00:18:22.676 --rc geninfo_unexecuted_blocks=1 00:18:22.676 00:18:22.677 ' 00:18:22.677 04:01:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:22.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.677 --rc genhtml_branch_coverage=1 00:18:22.677 --rc genhtml_function_coverage=1 00:18:22.677 --rc genhtml_legend=1 00:18:22.677 --rc geninfo_all_blocks=1 00:18:22.677 --rc geninfo_unexecuted_blocks=1 00:18:22.677 00:18:22.677 ' 00:18:22.677 04:01:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:22.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.677 --rc genhtml_branch_coverage=1 00:18:22.677 --rc genhtml_function_coverage=1 00:18:22.677 --rc genhtml_legend=1 00:18:22.677 --rc geninfo_all_blocks=1 00:18:22.677 --rc geninfo_unexecuted_blocks=1 00:18:22.677 00:18:22.677 ' 00:18:22.677 04:01:57 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.677 04:01:57 -- nvmf/common.sh@7 -- # uname -s 00:18:22.677 04:01:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.677 04:01:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.677 04:01:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.677 04:01:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.677 04:01:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.677 04:01:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.677 04:01:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.677 04:01:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.677 04:01:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.677 04:01:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.677 04:01:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:18:22.677 
04:01:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:18:22.677 04:01:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.677 04:01:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.677 04:01:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.677 04:01:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.677 04:01:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.677 04:01:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.677 04:01:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.677 04:01:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.677 04:01:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.677 04:01:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.677 04:01:57 -- paths/export.sh@5 -- # export PATH 00:18:22.677 04:01:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.677 04:01:57 -- nvmf/common.sh@46 -- # : 0 00:18:22.677 04:01:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.677 04:01:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.677 04:01:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.677 04:01:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.677 04:01:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.677 04:01:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
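Note that sourcing nvmf/common.sh derives the host identity at run time rather than hard-coding it: NVME_HOSTNQN comes from nvme-cli and NVME_HOSTID is extracted from it, and both are packed into the NVME_HOST array that helpers pass to 'nvme connect'. On this runner:
$ nvme gen-hostnqn   # produced nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 in this run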
00:18:22.677 04:01:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.677 04:01:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.677 04:01:57 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.677 04:01:57 -- fips/fips.sh@89 -- # check_openssl_version 00:18:22.677 04:01:57 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:22.677 04:01:57 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:22.677 04:01:57 -- fips/fips.sh@85 -- # openssl version 00:18:22.677 04:01:57 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:22.677 04:01:57 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:22.677 04:01:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.677 04:01:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.677 04:01:57 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.677 04:01:57 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.677 04:01:57 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.677 04:01:57 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.677 04:01:57 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:22.677 04:01:57 -- scripts/common.sh@339 -- # ver1_l=3 00:18:22.677 04:01:57 -- scripts/common.sh@340 -- # ver2_l=3 00:18:22.677 04:01:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.677 04:01:57 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.677 04:01:57 -- scripts/common.sh@347 -- # : 1 00:18:22.677 04:01:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.677 04:01:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.677 04:01:57 -- scripts/common.sh@364 -- # decimal 3 00:18:22.677 04:01:57 -- scripts/common.sh@352 -- # local d=3 00:18:22.677 04:01:57 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:22.677 04:01:57 -- scripts/common.sh@354 -- # echo 3 00:18:22.677 04:01:57 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:22.677 04:01:57 -- scripts/common.sh@365 -- # decimal 3 00:18:22.677 04:01:57 -- scripts/common.sh@352 -- # local d=3 00:18:22.677 04:01:57 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:22.677 04:01:57 -- scripts/common.sh@354 -- # echo 3 00:18:22.677 04:01:57 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:22.677 04:01:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.677 04:01:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.677 04:01:57 -- scripts/common.sh@363 -- # (( v++ )) 00:18:22.677 04:01:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.677 04:01:57 -- scripts/common.sh@364 -- # decimal 1 00:18:22.677 04:01:57 -- scripts/common.sh@352 -- # local d=1 00:18:22.677 04:01:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.677 04:01:57 -- scripts/common.sh@354 -- # echo 1 00:18:22.677 04:01:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.677 04:01:57 -- scripts/common.sh@365 -- # decimal 0 00:18:22.677 04:01:57 -- scripts/common.sh@352 -- # local d=0 00:18:22.677 04:01:57 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:22.677 04:01:57 -- scripts/common.sh@354 -- # echo 0 00:18:22.677 04:01:57 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:22.677 04:01:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.677 04:01:57 -- scripts/common.sh@366 -- # return 0 00:18:22.677 04:01:57 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:22.677 04:01:57 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:22.677 04:01:57 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:22.677 04:01:57 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:22.677 04:01:57 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:22.677 04:01:57 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:22.677 04:01:57 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:22.677 04:01:57 -- fips/fips.sh@113 -- # build_openssl_config 00:18:22.677 04:01:57 -- fips/fips.sh@37 -- # cat 00:18:22.677 04:01:57 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:22.677 04:01:57 -- fips/fips.sh@58 -- # cat - 00:18:22.677 04:01:57 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:22.677 04:01:57 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:22.677 04:01:57 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:22.677 04:01:57 -- fips/fips.sh@116 -- # openssl list -providers 00:18:22.677 04:01:57 -- fips/fips.sh@116 -- # grep name 00:18:22.677 04:01:57 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:22.677 04:01:57 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:22.677 04:01:57 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:22.677 04:01:57 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:22.677 04:01:57 -- fips/fips.sh@127 -- # : 00:18:22.677 04:01:57 -- common/autotest_common.sh@650 -- # local es=0 00:18:22.677 04:01:57 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:22.677 04:01:57 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:22.677 04:01:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.677 04:01:57 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:22.677 04:01:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.677 04:01:57 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:22.677 04:01:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.677 04:01:57 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:22.677 04:01:57 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:22.677 04:01:57 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:22.677 Error setting digest 00:18:22.677 40B2DA86A77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:22.677 40B2DA86A77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:22.677 04:01:57 -- common/autotest_common.sh@653 -- # es=1 00:18:22.677 04:01:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.677 04:01:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.678 04:01:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.678 04:01:57 -- fips/fips.sh@130 -- # nvmftestinit 00:18:22.678 04:01:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.678 04:01:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.678 04:01:57 -- nvmf/common.sh@436 -- # prepare_net_devs 
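The NOT-wrapped openssl md5 call above is the core of the FIPS check: the test passes precisely because the digest fails (es=1), which proves the FIPS provider is actually enforcing the algorithm policy. A standalone reproduction of the same idea, assuming an OpenSSL 3.x build with the fips provider configured:

# Sketch: under a FIPS-enforcing OpenSSL config, MD5 must be rejected.
# spdk_fips.conf mirrors the OPENSSL_CONF used in the trace above.
export OPENSSL_CONF=spdk_fips.conf
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "unexpected: MD5 succeeded, FIPS is not enforced" >&2
    exit 1
fi
echo "MD5 rejected as expected under FIPS"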
00:18:22.678 04:01:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.678 04:01:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.678 04:01:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.678 04:01:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.678 04:01:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.678 04:01:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:22.678 04:01:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:22.678 04:01:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:22.678 04:01:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:22.678 04:01:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:22.678 04:01:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:22.678 04:01:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.678 04:01:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.678 04:01:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.678 04:01:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:22.678 04:01:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.678 04:01:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.678 04:01:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.678 04:01:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.678 04:01:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.678 04:01:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.678 04:01:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.678 04:01:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.678 04:01:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:22.678 04:01:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:22.678 Cannot find device "nvmf_tgt_br" 00:18:22.678 04:01:57 -- nvmf/common.sh@154 -- # true 00:18:22.678 04:01:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.678 Cannot find device "nvmf_tgt_br2" 00:18:22.678 04:01:57 -- nvmf/common.sh@155 -- # true 00:18:22.678 04:01:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:22.678 04:01:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:22.936 Cannot find device "nvmf_tgt_br" 00:18:22.936 04:01:57 -- nvmf/common.sh@157 -- # true 00:18:22.936 04:01:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:22.936 Cannot find device "nvmf_tgt_br2" 00:18:22.936 04:01:57 -- nvmf/common.sh@158 -- # true 00:18:22.936 04:01:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:22.936 04:01:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:22.936 04:01:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.936 04:01:57 -- nvmf/common.sh@161 -- # true 00:18:22.936 04:01:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.936 04:01:57 -- nvmf/common.sh@162 -- # true 00:18:22.936 04:01:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.936 04:01:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.936 04:01:57 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.936 04:01:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.936 04:01:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.936 04:01:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.936 04:01:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.936 04:01:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.936 04:01:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.936 04:01:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:22.936 04:01:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:22.936 04:01:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:22.936 04:01:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:22.936 04:01:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.936 04:01:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.936 04:01:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.936 04:01:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:22.936 04:01:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:22.936 04:01:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.936 04:01:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.936 04:01:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:23.195 04:01:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:23.195 04:01:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:23.195 04:01:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:23.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:23.195 00:18:23.195 --- 10.0.0.2 ping statistics --- 00:18:23.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.195 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:23.195 04:01:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:23.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:23.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:23.195 00:18:23.195 --- 10.0.0.3 ping statistics --- 00:18:23.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.195 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:23.195 04:01:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:23.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:23.195 00:18:23.195 --- 10.0.0.1 ping statistics --- 00:18:23.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.195 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:23.195 04:01:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.195 04:01:58 -- nvmf/common.sh@421 -- # return 0 00:18:23.195 04:01:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:23.195 04:01:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.195 04:01:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:23.195 04:01:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:23.195 04:01:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.195 04:01:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:23.195 04:01:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:23.195 04:01:58 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:23.195 04:01:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.195 04:01:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.195 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:18:23.195 04:01:58 -- nvmf/common.sh@469 -- # nvmfpid=79429 00:18:23.195 04:01:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.195 04:01:58 -- nvmf/common.sh@470 -- # waitforlisten 79429 00:18:23.195 04:01:58 -- common/autotest_common.sh@829 -- # '[' -z 79429 ']' 00:18:23.195 04:01:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.195 04:01:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.195 04:01:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.195 04:01:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.195 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:18:23.195 [2024-11-08 04:01:58.185172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:23.196 [2024-11-08 04:01:58.185272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.454 [2024-11-08 04:01:58.325411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.454 [2024-11-08 04:01:58.415176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.454 [2024-11-08 04:01:58.415309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.454 [2024-11-08 04:01:58.415324] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.454 [2024-11-08 04:01:58.415332] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
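The nvmf_veth_init sequence traced above wires up the test topology: one initiator veth pair on the host, target veth pairs whose inner ends live in the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side ends, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace. A condensed replay of the same commands (second target interface and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move inner end
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                     # join host-side ends
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port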
00:18:23.454 [2024-11-08 04:01:58.415365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.390 04:01:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.390 04:01:59 -- common/autotest_common.sh@862 -- # return 0 00:18:24.390 04:01:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:24.390 04:01:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.390 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:18:24.390 04:01:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.390 04:01:59 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:24.390 04:01:59 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:24.390 04:01:59 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.390 04:01:59 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:24.390 04:01:59 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.390 04:01:59 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.390 04:01:59 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.390 04:01:59 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.390 [2024-11-08 04:01:59.448222] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.390 [2024-11-08 04:01:59.464195] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.390 [2024-11-08 04:01:59.464395] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.649 malloc0 00:18:24.649 04:01:59 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.649 04:01:59 -- fips/fips.sh@147 -- # bdevperf_pid=79487 00:18:24.649 04:01:59 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.649 04:01:59 -- fips/fips.sh@148 -- # waitforlisten 79487 /var/tmp/bdevperf.sock 00:18:24.649 04:01:59 -- common/autotest_common.sh@829 -- # '[' -z 79487 ']' 00:18:24.649 04:01:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.649 04:01:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.649 04:01:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.649 04:01:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.649 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:18:24.649 [2024-11-08 04:01:59.607896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:24.649 [2024-11-08 04:01:59.607984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79487 ] 00:18:24.649 [2024-11-08 04:01:59.747507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.907 [2024-11-08 04:01:59.847036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.880 04:02:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.880 04:02:00 -- common/autotest_common.sh@862 -- # return 0 00:18:25.880 04:02:00 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:25.880 [2024-11-08 04:02:00.834619] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.880 TLSTESTn1 00:18:25.880 04:02:00 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.139 Running I/O for 10 seconds... 00:18:36.109 00:18:36.109 Latency(us) 00:18:36.109 [2024-11-08T04:02:11.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.109 [2024-11-08T04:02:11.220Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.109 Verification LBA range: start 0x0 length 0x2000 00:18:36.109 TLSTESTn1 : 10.02 5001.98 19.54 0.00 0.00 25545.46 5153.51 22401.40 00:18:36.109 [2024-11-08T04:02:11.220Z] =================================================================================================================== 00:18:36.109 [2024-11-08T04:02:11.220Z] Total : 5001.98 19.54 0.00 0.00 25545.46 5153.51 22401.40 00:18:36.109 0 00:18:36.109 04:02:11 -- fips/fips.sh@1 -- # cleanup 00:18:36.109 04:02:11 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:36.109 04:02:11 -- common/autotest_common.sh@806 -- # type=--id 00:18:36.109 04:02:11 -- common/autotest_common.sh@807 -- # id=0 00:18:36.109 04:02:11 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:36.109 04:02:11 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:36.109 04:02:11 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:36.109 04:02:11 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:36.109 04:02:11 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:36.109 04:02:11 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:36.109 nvmf_trace.0 00:18:36.109 04:02:11 -- common/autotest_common.sh@821 -- # return 0 00:18:36.109 04:02:11 -- fips/fips.sh@16 -- # killprocess 79487 00:18:36.109 04:02:11 -- common/autotest_common.sh@936 -- # '[' -z 79487 ']' 00:18:36.109 04:02:11 -- common/autotest_common.sh@940 -- # kill -0 79487 00:18:36.109 04:02:11 -- common/autotest_common.sh@941 -- # uname 00:18:36.109 04:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.109 04:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79487 00:18:36.109 04:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:36.109 04:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:36.109 
killing process with pid 79487 00:18:36.109 04:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79487' 00:18:36.109 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.109 00:18:36.109 Latency(us) 00:18:36.109 [2024-11-08T04:02:11.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.109 [2024-11-08T04:02:11.220Z] =================================================================================================================== 00:18:36.109 [2024-11-08T04:02:11.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.109 04:02:11 -- common/autotest_common.sh@955 -- # kill 79487 00:18:36.109 04:02:11 -- common/autotest_common.sh@960 -- # wait 79487 00:18:36.368 04:02:11 -- fips/fips.sh@17 -- # nvmftestfini 00:18:36.368 04:02:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:36.368 04:02:11 -- nvmf/common.sh@116 -- # sync 00:18:36.627 04:02:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:36.627 04:02:11 -- nvmf/common.sh@119 -- # set +e 00:18:36.627 04:02:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:36.627 04:02:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:36.627 rmmod nvme_tcp 00:18:36.627 rmmod nvme_fabrics 00:18:36.627 rmmod nvme_keyring 00:18:36.627 04:02:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:36.627 04:02:11 -- nvmf/common.sh@123 -- # set -e 00:18:36.627 04:02:11 -- nvmf/common.sh@124 -- # return 0 00:18:36.627 04:02:11 -- nvmf/common.sh@477 -- # '[' -n 79429 ']' 00:18:36.627 04:02:11 -- nvmf/common.sh@478 -- # killprocess 79429 00:18:36.627 04:02:11 -- common/autotest_common.sh@936 -- # '[' -z 79429 ']' 00:18:36.627 04:02:11 -- common/autotest_common.sh@940 -- # kill -0 79429 00:18:36.627 04:02:11 -- common/autotest_common.sh@941 -- # uname 00:18:36.627 04:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.627 04:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79429 00:18:36.627 04:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.627 04:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.627 04:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79429' 00:18:36.627 killing process with pid 79429 00:18:36.627 04:02:11 -- common/autotest_common.sh@955 -- # kill 79429 00:18:36.627 04:02:11 -- common/autotest_common.sh@960 -- # wait 79429 00:18:36.885 04:02:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.885 04:02:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.885 04:02:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.885 04:02:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.885 04:02:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.885 04:02:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.885 04:02:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.885 04:02:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.885 04:02:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:36.885 04:02:11 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:36.885 00:18:36.885 real 0m14.589s 00:18:36.885 user 0m19.052s 00:18:36.885 sys 0m6.320s 00:18:36.885 04:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.885 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:18:36.885 ************************************ 00:18:36.885 END TEST nvmf_fips 
00:18:36.885 ************************************ 00:18:37.143 04:02:11 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:37.143 04:02:11 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:37.143 04:02:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:37.143 04:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.143 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:18:37.143 ************************************ 00:18:37.143 START TEST nvmf_fuzz 00:18:37.143 ************************************ 00:18:37.143 04:02:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:37.143 * Looking for test storage... 00:18:37.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.143 04:02:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:37.143 04:02:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:37.143 04:02:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:37.143 04:02:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:37.143 04:02:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:37.143 04:02:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:37.143 04:02:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:37.143 04:02:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:37.143 04:02:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:37.143 04:02:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.143 04:02:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:37.143 04:02:12 -- scripts/common.sh@337 -- # local 'op=<' 00:18:37.143 04:02:12 -- scripts/common.sh@339 -- # ver1_l=2 00:18:37.143 04:02:12 -- scripts/common.sh@340 -- # ver2_l=1 00:18:37.143 04:02:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:37.143 04:02:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:37.143 04:02:12 -- scripts/common.sh@344 -- # : 1 00:18:37.143 04:02:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:37.143 04:02:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.143 04:02:12 -- scripts/common.sh@364 -- # decimal 1 00:18:37.143 04:02:12 -- scripts/common.sh@352 -- # local d=1 00:18:37.143 04:02:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.143 04:02:12 -- scripts/common.sh@354 -- # echo 1 00:18:37.143 04:02:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:37.143 04:02:12 -- scripts/common.sh@365 -- # decimal 2 00:18:37.143 04:02:12 -- scripts/common.sh@352 -- # local d=2 00:18:37.143 04:02:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.143 04:02:12 -- scripts/common.sh@354 -- # echo 2 00:18:37.143 04:02:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:37.143 04:02:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:37.143 04:02:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:37.143 04:02:12 -- scripts/common.sh@367 -- # return 0 00:18:37.143 04:02:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.143 04:02:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.143 --rc genhtml_branch_coverage=1 00:18:37.143 --rc genhtml_function_coverage=1 00:18:37.143 --rc genhtml_legend=1 00:18:37.143 --rc geninfo_all_blocks=1 00:18:37.143 --rc geninfo_unexecuted_blocks=1 00:18:37.143 00:18:37.143 ' 00:18:37.143 04:02:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.143 --rc genhtml_branch_coverage=1 00:18:37.143 --rc genhtml_function_coverage=1 00:18:37.143 --rc genhtml_legend=1 00:18:37.143 --rc geninfo_all_blocks=1 00:18:37.143 --rc geninfo_unexecuted_blocks=1 00:18:37.143 00:18:37.143 ' 00:18:37.143 04:02:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.143 --rc genhtml_branch_coverage=1 00:18:37.143 --rc genhtml_function_coverage=1 00:18:37.143 --rc genhtml_legend=1 00:18:37.143 --rc geninfo_all_blocks=1 00:18:37.143 --rc geninfo_unexecuted_blocks=1 00:18:37.143 00:18:37.143 ' 00:18:37.143 04:02:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.143 --rc genhtml_branch_coverage=1 00:18:37.143 --rc genhtml_function_coverage=1 00:18:37.143 --rc genhtml_legend=1 00:18:37.143 --rc geninfo_all_blocks=1 00:18:37.143 --rc geninfo_unexecuted_blocks=1 00:18:37.143 00:18:37.143 ' 00:18:37.143 04:02:12 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.143 04:02:12 -- nvmf/common.sh@7 -- # uname -s 00:18:37.143 04:02:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.143 04:02:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.143 04:02:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.143 04:02:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.143 04:02:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.143 04:02:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.143 04:02:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.143 04:02:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.143 04:02:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.143 04:02:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.143 04:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
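Worth noting at this point: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the NVME_HOSTID assignment that follows reuses the trailing UUID. A sketch of that derivation (the exact parameter expansion is an assumption, but it reproduces the values seen in this trace):

# Sketch: generate the host identity the way the trace suggests; needs nvme-cli.
NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID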
00:18:37.143 04:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:18:37.143 04:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.143 04:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.143 04:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.143 04:02:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.143 04:02:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.143 04:02:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.143 04:02:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.144 04:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 04:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 04:02:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 04:02:12 -- paths/export.sh@5 -- # export PATH 00:18:37.144 04:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 04:02:12 -- nvmf/common.sh@46 -- # : 0 00:18:37.144 04:02:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.144 04:02:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.144 04:02:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.144 04:02:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.144 04:02:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.144 04:02:12 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:37.144 04:02:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.144 04:02:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.144 04:02:12 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:37.144 04:02:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:37.144 04:02:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.144 04:02:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.144 04:02:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.144 04:02:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.144 04:02:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.144 04:02:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.144 04:02:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.144 04:02:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:37.144 04:02:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:37.144 04:02:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:37.144 04:02:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:37.144 04:02:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:37.144 04:02:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:37.144 04:02:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.144 04:02:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.144 04:02:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.144 04:02:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:37.144 04:02:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.144 04:02:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.144 04:02:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.144 04:02:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.144 04:02:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.144 04:02:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.144 04:02:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.144 04:02:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.144 04:02:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:37.144 04:02:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:37.402 Cannot find device "nvmf_tgt_br" 00:18:37.402 04:02:12 -- nvmf/common.sh@154 -- # true 00:18:37.402 04:02:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.402 Cannot find device "nvmf_tgt_br2" 00:18:37.402 04:02:12 -- nvmf/common.sh@155 -- # true 00:18:37.402 04:02:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:37.402 04:02:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:37.402 Cannot find device "nvmf_tgt_br" 00:18:37.402 04:02:12 -- nvmf/common.sh@157 -- # true 00:18:37.402 04:02:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:37.402 Cannot find device "nvmf_tgt_br2" 00:18:37.402 04:02:12 -- nvmf/common.sh@158 -- # true 00:18:37.402 04:02:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:37.402 04:02:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:37.402 04:02:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.402 04:02:12 -- nvmf/common.sh@161 -- # true 00:18:37.402 04:02:12 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.402 04:02:12 -- nvmf/common.sh@162 -- # true 00:18:37.402 04:02:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.403 04:02:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.403 04:02:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.403 04:02:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.403 04:02:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.403 04:02:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.403 04:02:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.403 04:02:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.403 04:02:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.403 04:02:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:37.403 04:02:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:37.403 04:02:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:37.403 04:02:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:37.403 04:02:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.403 04:02:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.403 04:02:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.403 04:02:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:37.403 04:02:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:37.403 04:02:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.403 04:02:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.661 04:02:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.661 04:02:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.661 04:02:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.661 04:02:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:37.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:18:37.661 00:18:37.661 --- 10.0.0.2 ping statistics --- 00:18:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.661 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:37.661 04:02:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:37.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:37.661 00:18:37.661 --- 10.0.0.3 ping statistics --- 00:18:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.661 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:37.661 04:02:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:37.661 00:18:37.661 --- 10.0.0.1 ping statistics --- 00:18:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.661 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:37.661 04:02:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.661 04:02:12 -- nvmf/common.sh@421 -- # return 0 00:18:37.661 04:02:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.661 04:02:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.661 04:02:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:37.661 04:02:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:37.661 04:02:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.661 04:02:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:37.661 04:02:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:37.661 04:02:12 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79836 00:18:37.661 04:02:12 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:37.661 04:02:12 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:37.661 04:02:12 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79836 00:18:37.661 04:02:12 -- common/autotest_common.sh@829 -- # '[' -z 79836 ']' 00:18:37.661 04:02:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.661 04:02:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.661 04:02:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
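waitforlisten, entered above for pid 79836, spins until the freshly launched nvmf_tgt answers on its RPC socket. A sketch of the idea, not the exact helper, using the rpc_get_methods call that any running SPDK app serves:

# Sketch: poll the SPDK RPC socket until the target is up (or gone).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do                    # max_retries=100, as traced
        kill -0 "$pid" 2>/dev/null || return 1         # target process died
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}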
00:18:37.661 04:02:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.661 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:18:38.596 04:02:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.596 04:02:13 -- common/autotest_common.sh@862 -- # return 0 00:18:38.596 04:02:13 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.596 04:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.596 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:38.596 04:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.596 04:02:13 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:38.596 04:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.596 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:38.596 Malloc0 00:18:38.596 04:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.596 04:02:13 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.596 04:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.596 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:38.855 04:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.855 04:02:13 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.855 04:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.855 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:38.855 04:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.855 04:02:13 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.855 04:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.855 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:38.855 04:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.855 04:02:13 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:38.855 04:02:13 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:39.114 Shutting down the fuzz application 00:18:39.114 04:02:14 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:39.372 Shutting down the fuzz application 00:18:39.372 04:02:14 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.372 04:02:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.372 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:39.372 04:02:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.372 04:02:14 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:39.372 04:02:14 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:39.372 04:02:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:39.372 04:02:14 -- nvmf/common.sh@116 -- # sync 00:18:39.631 04:02:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:39.631 04:02:14 -- nvmf/common.sh@119 -- # set +e 00:18:39.631 04:02:14 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:39.631 04:02:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:39.631 rmmod nvme_tcp 00:18:39.631 rmmod nvme_fabrics 00:18:39.631 rmmod nvme_keyring 00:18:39.631 04:02:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:39.631 04:02:14 -- nvmf/common.sh@123 -- # set -e 00:18:39.631 04:02:14 -- nvmf/common.sh@124 -- # return 0 00:18:39.631 04:02:14 -- nvmf/common.sh@477 -- # '[' -n 79836 ']' 00:18:39.631 04:02:14 -- nvmf/common.sh@478 -- # killprocess 79836 00:18:39.631 04:02:14 -- common/autotest_common.sh@936 -- # '[' -z 79836 ']' 00:18:39.631 04:02:14 -- common/autotest_common.sh@940 -- # kill -0 79836 00:18:39.631 04:02:14 -- common/autotest_common.sh@941 -- # uname 00:18:39.631 04:02:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.631 04:02:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79836 00:18:39.631 killing process with pid 79836 00:18:39.631 04:02:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:39.631 04:02:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:39.631 04:02:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79836' 00:18:39.631 04:02:14 -- common/autotest_common.sh@955 -- # kill 79836 00:18:39.631 04:02:14 -- common/autotest_common.sh@960 -- # wait 79836 00:18:39.891 04:02:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:39.891 04:02:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:39.891 04:02:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:39.891 04:02:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.891 04:02:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:39.891 04:02:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.891 04:02:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.891 04:02:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.891 04:02:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:39.891 04:02:14 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:39.891 ************************************ 00:18:39.891 END TEST nvmf_fuzz 00:18:39.891 ************************************ 00:18:39.891 00:18:39.891 real 0m2.881s 00:18:39.891 user 0m3.053s 00:18:39.891 sys 0m0.687s 00:18:39.891 04:02:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.891 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:39.891 04:02:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:39.891 04:02:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.891 04:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.891 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:39.891 ************************************ 00:18:39.891 START TEST nvmf_multiconnection 00:18:39.891 ************************************ 00:18:39.891 04:02:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:40.150 * Looking for test storage... 
00:18:40.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:40.150 04:02:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:40.150 04:02:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:40.150 04:02:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.150 04:02:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.150 04:02:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.150 04:02:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.150 04:02:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.150 04:02:15 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.150 04:02:15 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.150 04:02:15 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.150 04:02:15 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.150 04:02:15 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.150 04:02:15 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.150 04:02:15 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.150 04:02:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.150 04:02:15 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.150 04:02:15 -- scripts/common.sh@344 -- # : 1 00:18:40.150 04:02:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.150 04:02:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.150 04:02:15 -- scripts/common.sh@364 -- # decimal 1 00:18:40.150 04:02:15 -- scripts/common.sh@352 -- # local d=1 00:18:40.150 04:02:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.150 04:02:15 -- scripts/common.sh@354 -- # echo 1 00:18:40.150 04:02:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.150 04:02:15 -- scripts/common.sh@365 -- # decimal 2 00:18:40.150 04:02:15 -- scripts/common.sh@352 -- # local d=2 00:18:40.150 04:02:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.150 04:02:15 -- scripts/common.sh@354 -- # echo 2 00:18:40.150 04:02:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.150 04:02:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.150 04:02:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.150 04:02:15 -- scripts/common.sh@367 -- # return 0 00:18:40.150 04:02:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.150 04:02:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.150 --rc genhtml_branch_coverage=1 00:18:40.150 --rc genhtml_function_coverage=1 00:18:40.150 --rc genhtml_legend=1 00:18:40.150 --rc geninfo_all_blocks=1 00:18:40.150 --rc geninfo_unexecuted_blocks=1 00:18:40.150 00:18:40.150 ' 00:18:40.150 04:02:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.150 --rc genhtml_branch_coverage=1 00:18:40.150 --rc genhtml_function_coverage=1 00:18:40.150 --rc genhtml_legend=1 00:18:40.150 --rc geninfo_all_blocks=1 00:18:40.150 --rc geninfo_unexecuted_blocks=1 00:18:40.150 00:18:40.150 ' 00:18:40.150 04:02:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.150 --rc genhtml_branch_coverage=1 00:18:40.150 --rc genhtml_function_coverage=1 00:18:40.150 --rc genhtml_legend=1 00:18:40.150 --rc geninfo_all_blocks=1 00:18:40.150 --rc geninfo_unexecuted_blocks=1 00:18:40.150 00:18:40.150 ' 00:18:40.150 
04:02:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.150 --rc genhtml_branch_coverage=1 00:18:40.150 --rc genhtml_function_coverage=1 00:18:40.150 --rc genhtml_legend=1 00:18:40.150 --rc geninfo_all_blocks=1 00:18:40.150 --rc geninfo_unexecuted_blocks=1 00:18:40.150 00:18:40.150 ' 00:18:40.150 04:02:15 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.150 04:02:15 -- nvmf/common.sh@7 -- # uname -s 00:18:40.150 04:02:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.150 04:02:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.150 04:02:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.150 04:02:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.150 04:02:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.150 04:02:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.150 04:02:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.150 04:02:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.150 04:02:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.150 04:02:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.150 04:02:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:18:40.150 04:02:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:18:40.150 04:02:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.150 04:02:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.150 04:02:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.150 04:02:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.150 04:02:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.150 04:02:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.150 04:02:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.150 04:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.150 04:02:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.150 04:02:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.150 04:02:15 -- paths/export.sh@5 -- # export PATH 00:18:40.151 04:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.151 04:02:15 -- nvmf/common.sh@46 -- # : 0 00:18:40.151 04:02:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.151 04:02:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.151 04:02:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.151 04:02:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.151 04:02:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.151 04:02:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.151 04:02:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.151 04:02:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.151 04:02:15 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.151 04:02:15 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.151 04:02:15 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:40.151 04:02:15 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:40.151 04:02:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.151 04:02:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.151 04:02:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.151 04:02:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.151 04:02:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.151 04:02:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.151 04:02:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.151 04:02:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.151 04:02:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.151 04:02:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.151 04:02:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.151 04:02:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.151 04:02:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.151 04:02:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.151 04:02:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.151 04:02:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.151 04:02:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.151 04:02:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.151 04:02:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.151 04:02:15 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.151 04:02:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.151 04:02:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.151 04:02:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.151 04:02:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.151 04:02:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.151 04:02:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.151 04:02:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.151 04:02:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.151 Cannot find device "nvmf_tgt_br" 00:18:40.151 04:02:15 -- nvmf/common.sh@154 -- # true 00:18:40.151 04:02:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.151 Cannot find device "nvmf_tgt_br2" 00:18:40.151 04:02:15 -- nvmf/common.sh@155 -- # true 00:18:40.151 04:02:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.151 04:02:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.151 Cannot find device "nvmf_tgt_br" 00:18:40.151 04:02:15 -- nvmf/common.sh@157 -- # true 00:18:40.151 04:02:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.151 Cannot find device "nvmf_tgt_br2" 00:18:40.151 04:02:15 -- nvmf/common.sh@158 -- # true 00:18:40.151 04:02:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.151 04:02:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.409 04:02:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.409 04:02:15 -- nvmf/common.sh@161 -- # true 00:18:40.409 04:02:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.409 04:02:15 -- nvmf/common.sh@162 -- # true 00:18:40.409 04:02:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.409 04:02:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.409 04:02:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.409 04:02:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.409 04:02:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.409 04:02:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.409 04:02:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.409 04:02:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:40.409 04:02:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:40.409 04:02:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:40.409 04:02:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:40.409 04:02:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:40.409 04:02:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:40.409 04:02:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.409 04:02:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:40.409 04:02:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.409 04:02:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:40.409 04:02:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:40.409 04:02:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.409 04:02:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.409 04:02:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.409 04:02:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.409 04:02:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.409 04:02:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:40.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:40.409 00:18:40.409 --- 10.0.0.2 ping statistics --- 00:18:40.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.409 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:40.409 04:02:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:40.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:40.410 00:18:40.410 --- 10.0.0.3 ping statistics --- 00:18:40.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.410 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:40.410 04:02:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:40.410 00:18:40.410 --- 10.0.0.1 ping statistics --- 00:18:40.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.410 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:40.410 04:02:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.410 04:02:15 -- nvmf/common.sh@421 -- # return 0 00:18:40.410 04:02:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:40.410 04:02:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.410 04:02:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:40.410 04:02:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:40.410 04:02:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.410 04:02:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:40.410 04:02:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:40.410 04:02:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:40.410 04:02:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:40.410 04:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.410 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:18:40.410 04:02:15 -- nvmf/common.sh@469 -- # nvmfpid=80050 00:18:40.410 04:02:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:40.410 04:02:15 -- nvmf/common.sh@470 -- # waitforlisten 80050 00:18:40.410 04:02:15 -- common/autotest_common.sh@829 -- # '[' -z 80050 ']' 00:18:40.410 04:02:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.410 04:02:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.410 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:40.410 04:02:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.410 04:02:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.410 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:18:40.668 [2024-11-08 04:02:15.545074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:40.668 [2024-11-08 04:02:15.545157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.668 [2024-11-08 04:02:15.686719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.668 [2024-11-08 04:02:15.769845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:40.668 [2024-11-08 04:02:15.770045] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.668 [2024-11-08 04:02:15.770056] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.668 [2024-11-08 04:02:15.770065] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.668 [2024-11-08 04:02:15.770162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.669 [2024-11-08 04:02:15.770782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.669 [2024-11-08 04:02:15.771049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.669 [2024-11-08 04:02:15.771053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.605 04:02:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.605 04:02:16 -- common/autotest_common.sh@862 -- # return 0 00:18:41.605 04:02:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:41.605 04:02:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 04:02:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.605 04:02:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 [2024-11-08 04:02:16.621023] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.605 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.605 04:02:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:41.605 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.605 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 Malloc1 00:18:41.605 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.605 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.605 
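Note: the trace above has just completed nvmf_veth_init (a nvmf_tgt_ns_spdk network namespace holding the target ends of two veth pairs at 10.0.0.2 and 10.0.0.3, bridged over nvmf_br to the initiator interface at 10.0.0.1) and launched nvmf_tgt inside that namespace with core mask 0xF. A minimal hand-run sketch of the same target start and TCP transport creation using SPDK's rpc.py; $SPDK_DIR is an assumed path to an SPDK checkout, and framework_wait_init stands in for the script's waitforlisten polling:

  # run the target inside the test namespace, as the trace does
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  "$SPDK_DIR/scripts/rpc.py" framework_wait_init
  # transport flags copied verbatim from the rpc_cmd line in the trace
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192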
04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.605 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 [2024-11-08 04:02:16.698119] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.605 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.605 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.605 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:41.605 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.605 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 Malloc2 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.864 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 Malloc3 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
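Note: the sequence just traced for cnode1 (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) is the cycle multiconnection.sh repeats for all NVMF_SUBSYS=11 subsystems below. Collapsed into a standalone sketch, with rpc.py calls mirroring the rpc_cmd lines and $SPDK_DIR again an assumed checkout path:

  for i in $(seq 1 11); do
    # 64 MiB malloc bdev with 512-byte blocks, matching MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    # allow-any-host subsystem whose serial number (SPDK$i) the initiator later greps for
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done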
00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.864 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 Malloc4 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.864 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 Malloc5 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.864 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:41.864 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.864 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.865 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 Malloc6 00:18:41.865 04:02:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.865 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 Malloc7 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.865 04:02:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:41.865 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.865 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:42.124 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.124 04:02:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:42.124 04:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 Malloc8 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 
-- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.124 04:02:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 Malloc9 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.124 04:02:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 Malloc10 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.124 04:02:17 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 Malloc11 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:42.124 04:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.124 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:42.124 04:02:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.124 04:02:17 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:42.124 04:02:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.124 04:02:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.383 04:02:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:42.383 04:02:17 -- common/autotest_common.sh@1187 -- # local i=0 00:18:42.383 04:02:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.383 04:02:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:42.383 04:02:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:44.287 04:02:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:44.287 04:02:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:44.287 04:02:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:44.287 04:02:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:44.287 04:02:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.287 04:02:19 -- common/autotest_common.sh@1197 -- # return 0 00:18:44.287 04:02:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.287 04:02:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:44.546 04:02:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:44.546 04:02:19 -- common/autotest_common.sh@1187 -- # local i=0 00:18:44.546 04:02:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.546 04:02:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:44.546 04:02:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:47.076 04:02:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:47.076 04:02:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:47.076 04:02:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:47.076 04:02:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:47.076 04:02:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.076 04:02:21 -- common/autotest_common.sh@1197 -- # return 0 00:18:47.076 04:02:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.076 04:02:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:47.076 04:02:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:47.076 04:02:21 -- common/autotest_common.sh@1187 -- # local i=0 00:18:47.076 04:02:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.076 04:02:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:47.076 04:02:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:49.029 04:02:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:49.029 04:02:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:49.029 04:02:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:49.029 04:02:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:49.029 04:02:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.029 04:02:23 -- common/autotest_common.sh@1197 -- # return 0 00:18:49.029 04:02:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.029 04:02:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:49.029 04:02:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:49.029 04:02:23 -- common/autotest_common.sh@1187 -- # local i=0 00:18:49.029 04:02:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.029 04:02:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:49.029 04:02:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:50.935 04:02:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:50.935 04:02:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:50.935 04:02:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:50.935 04:02:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:50.935 04:02:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.935 04:02:25 -- common/autotest_common.sh@1197 -- # return 0 00:18:50.935 04:02:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.935 04:02:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:51.194 04:02:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:51.194 04:02:26 -- common/autotest_common.sh@1187 -- # local i=0 00:18:51.194 04:02:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.194 04:02:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:51.194 04:02:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:53.098 04:02:28 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:53.098 04:02:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:53.098 04:02:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:53.098 04:02:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:53.098 04:02:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.098 04:02:28 -- common/autotest_common.sh@1197 -- # return 0 00:18:53.098 04:02:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.098 04:02:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:53.357 04:02:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:53.357 04:02:28 -- common/autotest_common.sh@1187 -- # local i=0 00:18:53.357 04:02:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.357 04:02:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:53.357 04:02:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:55.890 04:02:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:55.890 04:02:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:55.890 04:02:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:55.890 04:02:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:55.890 04:02:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.890 04:02:30 -- common/autotest_common.sh@1197 -- # return 0 00:18:55.890 04:02:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.890 04:02:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:55.890 04:02:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:55.890 04:02:30 -- common/autotest_common.sh@1187 -- # local i=0 00:18:55.890 04:02:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.890 04:02:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:55.890 04:02:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:57.793 04:02:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:57.793 04:02:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:57.793 04:02:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:57.793 04:02:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:57.793 04:02:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.793 04:02:32 -- common/autotest_common.sh@1197 -- # return 0 00:18:57.793 04:02:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.793 04:02:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:57.793 04:02:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:57.793 04:02:32 -- common/autotest_common.sh@1187 -- # local i=0 00:18:57.793 04:02:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.793 04:02:32 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:57.793 04:02:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:59.697 04:02:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:59.697 04:02:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:59.697 04:02:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:59.698 04:02:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:59.698 04:02:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.698 04:02:34 -- common/autotest_common.sh@1197 -- # return 0 00:18:59.698 04:02:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.698 04:02:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:59.957 04:02:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:59.957 04:02:34 -- common/autotest_common.sh@1187 -- # local i=0 00:18:59.957 04:02:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.957 04:02:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:59.957 04:02:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:02.491 04:02:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:02.491 04:02:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:02.491 04:02:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:19:02.491 04:02:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:02.491 04:02:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.491 04:02:37 -- common/autotest_common.sh@1197 -- # return 0 00:19:02.491 04:02:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.491 04:02:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:02.491 04:02:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:02.491 04:02:37 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.491 04:02:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.491 04:02:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:02.491 04:02:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.394 04:02:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.394 04:02:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:04.394 04:02:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:19:04.394 04:02:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:04.394 04:02:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.394 04:02:39 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.394 04:02:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.395 04:02:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:04.395 04:02:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:04.395 04:02:39 -- common/autotest_common.sh@1187 -- # local i=0 
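Note: on the initiator side, each iteration pairs nvme connect with the waitforserial helper, which polls lsblk every 2 seconds (up to 15 tries, per the i++ <= 15 loop in the trace) until a block device carrying the subsystem's serial appears. A standalone sketch of that pattern for one subsystem; NVME_HOSTNQN and NVME_HOSTID hold the values produced by nvme gen-hostnqn earlier in the log:

  sub=11    # e.g. the last subsystem, cnode11 with serial SPDK11
  nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$sub" -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  retry=0
  # wait until exactly one block device reports the expected serial
  until (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$sub") == 1 )); do
    (( ++retry <= 15 )) || { echo "serial SPDK$sub never appeared" >&2; exit 1; }
    sleep 2
  done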
00:19:04.395 04:02:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.395 04:02:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:04.395 04:02:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:06.926 04:02:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:06.926 04:02:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:06.926 04:02:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:06.926 04:02:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:06.926 04:02:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.926 04:02:41 -- common/autotest_common.sh@1197 -- # return 0 00:19:06.926 04:02:41 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:06.926 [global] 00:19:06.926 thread=1 00:19:06.926 invalidate=1 00:19:06.926 rw=read 00:19:06.926 time_based=1 00:19:06.926 runtime=10 00:19:06.926 ioengine=libaio 00:19:06.926 direct=1 00:19:06.926 bs=262144 00:19:06.926 iodepth=64 00:19:06.926 norandommap=1 00:19:06.926 numjobs=1 00:19:06.926 00:19:06.926 [job0] 00:19:06.926 filename=/dev/nvme0n1 00:19:06.926 [job1] 00:19:06.926 filename=/dev/nvme10n1 00:19:06.926 [job2] 00:19:06.926 filename=/dev/nvme1n1 00:19:06.926 [job3] 00:19:06.926 filename=/dev/nvme2n1 00:19:06.926 [job4] 00:19:06.926 filename=/dev/nvme3n1 00:19:06.926 [job5] 00:19:06.926 filename=/dev/nvme4n1 00:19:06.926 [job6] 00:19:06.926 filename=/dev/nvme5n1 00:19:06.926 [job7] 00:19:06.926 filename=/dev/nvme6n1 00:19:06.926 [job8] 00:19:06.926 filename=/dev/nvme7n1 00:19:06.926 [job9] 00:19:06.926 filename=/dev/nvme8n1 00:19:06.926 [job10] 00:19:06.926 filename=/dev/nvme9n1 00:19:06.926 Could not set queue depth (nvme0n1) 00:19:06.926 Could not set queue depth (nvme10n1) 00:19:06.926 Could not set queue depth (nvme1n1) 00:19:06.926 Could not set queue depth (nvme2n1) 00:19:06.926 Could not set queue depth (nvme3n1) 00:19:06.926 Could not set queue depth (nvme4n1) 00:19:06.926 Could not set queue depth (nvme5n1) 00:19:06.926 Could not set queue depth (nvme6n1) 00:19:06.926 Could not set queue depth (nvme7n1) 00:19:06.926 Could not set queue depth (nvme8n1) 00:19:06.926 Could not set queue depth (nvme9n1) 00:19:06.926 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:19:06.926 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.926 fio-3.35 00:19:06.926 Starting 11 threads 00:19:19.159 00:19:19.159 job0: (groupid=0, jobs=1): err= 0: pid=80529: Fri Nov 8 04:02:52 2024 00:19:19.159 read: IOPS=492, BW=123MiB/s (129MB/s)(1242MiB/10094msec) 00:19:19.159 slat (usec): min=10, max=118486, avg=1965.55, stdev=7254.78 00:19:19.159 clat (msec): min=18, max=234, avg=127.81, stdev=23.40 00:19:19.159 lat (msec): min=20, max=242, avg=129.78, stdev=24.48 00:19:19.159 clat percentiles (msec): 00:19:19.159 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 111], 20.00th=[ 120], 00:19:19.159 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 134], 00:19:19.159 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 155], 00:19:19.159 | 99.00th=[ 199], 99.50th=[ 213], 99.90th=[ 222], 99.95th=[ 222], 00:19:19.159 | 99.99th=[ 234] 00:19:19.159 bw ( KiB/s): min=107008, max=186880, per=8.27%, avg=125657.20, stdev=15601.62, samples=20 00:19:19.159 iops : min= 418, max= 730, avg=490.60, stdev=61.02, samples=20 00:19:19.159 lat (msec) : 20=0.02%, 50=1.75%, 100=4.83%, 250=93.40% 00:19:19.159 cpu : usr=0.20%, sys=1.73%, ctx=827, majf=0, minf=4097 00:19:19.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:19.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.159 issued rwts: total=4969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.159 job1: (groupid=0, jobs=1): err= 0: pid=80531: Fri Nov 8 04:02:52 2024 00:19:19.159 read: IOPS=473, BW=118MiB/s (124MB/s)(1193MiB/10084msec) 00:19:19.159 slat (usec): min=21, max=70069, avg=2093.43, stdev=7112.04 00:19:19.159 clat (msec): min=30, max=212, avg=132.88, stdev=21.56 00:19:19.159 lat (msec): min=30, max=234, avg=134.98, stdev=22.76 00:19:19.159 clat percentiles (msec): 00:19:19.159 | 1.00th=[ 49], 5.00th=[ 90], 10.00th=[ 112], 20.00th=[ 124], 00:19:19.159 | 30.00th=[ 127], 40.00th=[ 132], 50.00th=[ 136], 60.00th=[ 140], 00:19:19.159 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 161], 00:19:19.159 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:19:19.159 | 99.99th=[ 213] 00:19:19.159 bw ( KiB/s): min=104960, max=175265, per=7.91%, avg=120232.60, stdev=14427.58, samples=20 00:19:19.159 iops : min= 410, max= 684, avg=470.30, stdev=55.85, samples=20 00:19:19.159 lat (msec) : 50=1.26%, 100=4.55%, 250=94.19% 00:19:19.159 cpu : usr=0.15%, sys=1.90%, ctx=817, majf=0, minf=4097 00:19:19.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:19.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.159 issued rwts: total=4771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.159 job2: (groupid=0, jobs=1): err= 0: pid=80535: Fri Nov 8 04:02:52 2024 00:19:19.159 read: IOPS=323, BW=81.0MiB/s (84.9MB/s)(823MiB/10155msec) 00:19:19.159 slat (usec): min=21, max=132084, avg=3037.21, stdev=11919.32 00:19:19.159 clat (msec): min=33, max=314, avg=194.05, stdev=24.74 00:19:19.159 lat (msec): min=35, max=314, avg=197.09, stdev=27.27 00:19:19.159 clat percentiles (msec): 00:19:19.159 | 1.00th=[ 66], 5.00th=[ 171], 
10.00th=[ 178], 20.00th=[ 184], 00:19:19.159 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:19:19.159 | 70.00th=[ 201], 80.00th=[ 207], 90.00th=[ 215], 95.00th=[ 224], 00:19:19.159 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:19:19.159 | 99.99th=[ 313] 00:19:19.159 bw ( KiB/s): min=69120, max=93508, per=5.44%, avg=82626.95, stdev=8392.26, samples=20 00:19:19.159 iops : min= 270, max= 365, avg=322.40, stdev=32.76, samples=20 00:19:19.159 lat (msec) : 50=0.61%, 100=0.46%, 250=96.50%, 500=2.43% 00:19:19.159 cpu : usr=0.12%, sys=1.26%, ctx=605, majf=0, minf=4097 00:19:19.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:19.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.159 issued rwts: total=3290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.159 job3: (groupid=0, jobs=1): err= 0: pid=80536: Fri Nov 8 04:02:52 2024 00:19:19.159 read: IOPS=323, BW=80.8MiB/s (84.7MB/s)(820MiB/10148msec) 00:19:19.159 slat (usec): min=20, max=149345, avg=3047.33, stdev=14633.72 00:19:19.159 clat (msec): min=49, max=348, avg=194.57, stdev=22.19 00:19:19.159 lat (msec): min=50, max=362, avg=197.61, stdev=26.27 00:19:19.159 clat percentiles (msec): 00:19:19.159 | 1.00th=[ 142], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:19:19.159 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 197], 00:19:19.159 | 70.00th=[ 199], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 218], 00:19:19.160 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 347], 00:19:19.160 | 99.99th=[ 351] 00:19:19.160 bw ( KiB/s): min=50789, max=96768, per=5.42%, avg=82302.80, stdev=12410.97, samples=20 00:19:19.160 iops : min= 198, max= 378, avg=321.40, stdev=48.56, samples=20 00:19:19.160 lat (msec) : 50=0.03%, 100=0.09%, 250=97.38%, 500=2.50% 00:19:19.160 cpu : usr=0.11%, sys=1.23%, ctx=535, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=3279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job4: (groupid=0, jobs=1): err= 0: pid=80537: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=317, BW=79.4MiB/s (83.3MB/s)(805MiB/10142msec) 00:19:19.160 slat (usec): min=22, max=111913, avg=3099.76, stdev=11136.55 00:19:19.160 clat (msec): min=134, max=321, avg=197.96, stdev=16.92 00:19:19.160 lat (msec): min=134, max=321, avg=201.06, stdev=19.80 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:19:19.160 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 201], 00:19:19.160 | 70.00th=[ 205], 80.00th=[ 207], 90.00th=[ 215], 95.00th=[ 222], 00:19:19.160 | 99.00th=[ 264], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:19:19.160 | 99.99th=[ 321] 00:19:19.160 bw ( KiB/s): min=64641, max=95744, per=5.32%, avg=80784.50, stdev=8748.45, samples=20 00:19:19.160 iops : min= 252, max= 374, avg=315.40, stdev=34.19, samples=20 00:19:19.160 lat (msec) : 250=98.67%, 500=1.33% 00:19:19.160 cpu : usr=0.12%, sys=1.36%, ctx=702, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=3221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job5: (groupid=0, jobs=1): err= 0: pid=80538: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=500, BW=125MiB/s (131MB/s)(1262MiB/10091msec) 00:19:19.160 slat (usec): min=21, max=105155, avg=1966.81, stdev=8198.79 00:19:19.160 clat (msec): min=18, max=217, avg=125.71, stdev=31.00 00:19:19.160 lat (msec): min=18, max=247, avg=127.68, stdev=32.21 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 106], 20.00th=[ 120], 00:19:19.160 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 136], 00:19:19.160 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 157], 00:19:19.160 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 215], 99.95th=[ 215], 00:19:19.160 | 99.99th=[ 218] 00:19:19.160 bw ( KiB/s): min=104239, max=273884, per=8.40%, avg=127540.65, stdev=35107.19, samples=20 00:19:19.160 iops : min= 407, max= 1069, avg=498.15, stdev=136.96, samples=20 00:19:19.160 lat (msec) : 20=0.26%, 50=7.47%, 100=2.02%, 250=90.26% 00:19:19.160 cpu : usr=0.17%, sys=1.69%, ctx=750, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=5049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job6: (groupid=0, jobs=1): err= 0: pid=80539: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=315, BW=78.8MiB/s (82.6MB/s)(800MiB/10154msec) 00:19:19.160 slat (usec): min=21, max=135482, avg=3099.59, stdev=10705.41 00:19:19.160 clat (msec): min=30, max=373, avg=199.57, stdev=23.47 00:19:19.160 lat (msec): min=32, max=373, avg=202.67, stdev=25.43 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 102], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:19:19.160 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 203], 00:19:19.160 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 220], 95.00th=[ 228], 00:19:19.160 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 330], 99.95th=[ 330], 00:19:19.160 | 99.99th=[ 376] 00:19:19.160 bw ( KiB/s): min=71536, max=92160, per=5.29%, avg=80321.40, stdev=5702.90, samples=20 00:19:19.160 iops : min= 279, max= 360, avg=313.40, stdev=22.28, samples=20 00:19:19.160 lat (msec) : 50=0.38%, 100=0.50%, 250=97.09%, 500=2.03% 00:19:19.160 cpu : usr=0.12%, sys=1.17%, ctx=606, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=3199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job7: (groupid=0, jobs=1): err= 0: pid=80540: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=313, BW=78.4MiB/s (82.2MB/s)(796MiB/10146msec) 00:19:19.160 slat (usec): min=21, max=119201, avg=3137.19, stdev=10604.07 00:19:19.160 clat (msec): min=51, max=360, 
avg=200.51, stdev=22.80 00:19:19.160 lat (msec): min=53, max=360, avg=203.65, stdev=25.01 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 144], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:19:19.160 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 203], 00:19:19.160 | 70.00th=[ 207], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 228], 00:19:19.160 | 99.00th=[ 264], 99.50th=[ 317], 99.90th=[ 359], 99.95th=[ 359], 00:19:19.160 | 99.99th=[ 359] 00:19:19.160 bw ( KiB/s): min=65536, max=92672, per=5.26%, avg=79830.00, stdev=7331.43, samples=20 00:19:19.160 iops : min= 256, max= 362, avg=311.80, stdev=28.63, samples=20 00:19:19.160 lat (msec) : 100=0.82%, 250=96.76%, 500=2.42% 00:19:19.160 cpu : usr=0.11%, sys=1.30%, ctx=587, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=3182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job8: (groupid=0, jobs=1): err= 0: pid=80541: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=2148, BW=537MiB/s (563MB/s)(5384MiB/10022msec) 00:19:19.160 slat (usec): min=19, max=27415, avg=453.17, stdev=2093.92 00:19:19.160 clat (msec): min=8, max=198, avg=29.28, stdev= 8.75 00:19:19.160 lat (msec): min=8, max=198, avg=29.73, stdev= 8.90 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 26], 00:19:19.160 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 29], 00:19:19.160 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 41], 00:19:19.160 | 99.00th=[ 46], 99.50th=[ 49], 99.90th=[ 188], 99.95th=[ 190], 00:19:19.160 | 99.99th=[ 199] 00:19:19.160 bw ( KiB/s): min=474624, max=588288, per=36.16%, avg=549352.35, stdev=25661.62, samples=20 00:19:19.160 iops : min= 1854, max= 2298, avg=2145.70, stdev=100.27, samples=20 00:19:19.160 lat (msec) : 10=0.09%, 20=2.98%, 50=96.70%, 100=0.06%, 250=0.18% 00:19:19.160 cpu : usr=0.72%, sys=5.93%, ctx=5350, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=21535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job9: (groupid=0, jobs=1): err= 0: pid=80542: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=457, BW=114MiB/s (120MB/s)(1154MiB/10092msec) 00:19:19.160 slat (usec): min=20, max=65735, avg=2139.83, stdev=7269.45 00:19:19.160 clat (msec): min=10, max=284, avg=137.63, stdev=25.97 00:19:19.160 lat (msec): min=12, max=284, avg=139.77, stdev=26.82 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 59], 5.00th=[ 87], 10.00th=[ 117], 20.00th=[ 126], 00:19:19.160 | 30.00th=[ 131], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 142], 00:19:19.160 | 70.00th=[ 146], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 176], 00:19:19.160 | 99.00th=[ 224], 99.50th=[ 253], 99.90th=[ 275], 99.95th=[ 284], 00:19:19.160 | 99.99th=[ 284] 00:19:19.160 bw ( KiB/s): min=90624, max=173732, per=7.67%, avg=116441.65, stdev=16626.87, samples=20 00:19:19.160 iops : min= 354, max= 678, avg=454.80, stdev=64.84, samples=20 
00:19:19.160 lat (msec) : 20=0.13%, 50=0.78%, 100=4.79%, 250=93.76%, 500=0.54% 00:19:19.160 cpu : usr=0.27%, sys=1.59%, ctx=768, majf=0, minf=4098 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=4614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.160 job10: (groupid=0, jobs=1): err= 0: pid=80543: Fri Nov 8 04:02:52 2024 00:19:19.160 read: IOPS=310, BW=77.6MiB/s (81.4MB/s)(787MiB/10145msec) 00:19:19.160 slat (usec): min=21, max=164677, avg=3180.63, stdev=12024.72 00:19:19.160 clat (msec): min=90, max=323, avg=202.53, stdev=23.57 00:19:19.160 lat (msec): min=91, max=360, avg=205.71, stdev=26.15 00:19:19.160 clat percentiles (msec): 00:19:19.160 | 1.00th=[ 97], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 00:19:19.160 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 207], 00:19:19.160 | 70.00th=[ 209], 80.00th=[ 215], 90.00th=[ 224], 95.00th=[ 232], 00:19:19.160 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 313], 99.95th=[ 313], 00:19:19.160 | 99.99th=[ 326] 00:19:19.160 bw ( KiB/s): min=71168, max=88064, per=5.20%, avg=78960.00, stdev=4818.70, samples=20 00:19:19.160 iops : min= 278, max= 344, avg=308.40, stdev=18.82, samples=20 00:19:19.160 lat (msec) : 100=1.33%, 250=96.60%, 500=2.06% 00:19:19.160 cpu : usr=0.06%, sys=1.22%, ctx=546, majf=0, minf=4097 00:19:19.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:19.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.160 issued rwts: total=3149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.161 00:19:19.161 Run status group 0 (all jobs): 00:19:19.161 READ: bw=1483MiB/s (1556MB/s), 77.6MiB/s-537MiB/s (81.4MB/s-563MB/s), io=14.7GiB (15.8GB), run=10022-10155msec 00:19:19.161 00:19:19.161 Disk stats (read/write): 00:19:19.161 nvme0n1: ios=9856/0, merge=0/0, ticks=1240123/0, in_queue=1240123, util=97.65% 00:19:19.161 nvme10n1: ios=9414/0, merge=0/0, ticks=1236877/0, in_queue=1236877, util=97.67% 00:19:19.161 nvme1n1: ios=6453/0, merge=0/0, ticks=1234026/0, in_queue=1234026, util=97.80% 00:19:19.161 nvme2n1: ios=6431/0, merge=0/0, ticks=1238910/0, in_queue=1238910, util=97.90% 00:19:19.161 nvme3n1: ios=6315/0, merge=0/0, ticks=1235015/0, in_queue=1235015, util=97.94% 00:19:19.161 nvme4n1: ios=9970/0, merge=0/0, ticks=1238634/0, in_queue=1238634, util=98.30% 00:19:19.161 nvme5n1: ios=6275/0, merge=0/0, ticks=1235208/0, in_queue=1235208, util=98.39% 00:19:19.161 nvme6n1: ios=6236/0, merge=0/0, ticks=1236856/0, in_queue=1236856, util=98.31% 00:19:19.161 nvme7n1: ios=42907/0, merge=0/0, ticks=1194649/0, in_queue=1194649, util=98.70% 00:19:19.161 nvme8n1: ios=9100/0, merge=0/0, ticks=1237064/0, in_queue=1237064, util=98.68% 00:19:19.161 nvme9n1: ios=6171/0, merge=0/0, ticks=1231037/0, in_queue=1231037, util=98.81% 00:19:19.161 04:02:52 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:19.161 [global] 00:19:19.161 thread=1 00:19:19.161 invalidate=1 00:19:19.161 rw=randwrite 00:19:19.161 time_based=1 00:19:19.161 runtime=10 00:19:19.161 
ioengine=libaio 00:19:19.161 direct=1 00:19:19.161 bs=262144 00:19:19.161 iodepth=64 00:19:19.161 norandommap=1 00:19:19.161 numjobs=1 00:19:19.161 00:19:19.161 [job0] 00:19:19.161 filename=/dev/nvme0n1 00:19:19.161 [job1] 00:19:19.161 filename=/dev/nvme10n1 00:19:19.161 [job2] 00:19:19.161 filename=/dev/nvme1n1 00:19:19.161 [job3] 00:19:19.161 filename=/dev/nvme2n1 00:19:19.161 [job4] 00:19:19.161 filename=/dev/nvme3n1 00:19:19.161 [job5] 00:19:19.161 filename=/dev/nvme4n1 00:19:19.161 [job6] 00:19:19.161 filename=/dev/nvme5n1 00:19:19.161 [job7] 00:19:19.161 filename=/dev/nvme6n1 00:19:19.161 [job8] 00:19:19.161 filename=/dev/nvme7n1 00:19:19.161 [job9] 00:19:19.161 filename=/dev/nvme8n1 00:19:19.161 [job10] 00:19:19.161 filename=/dev/nvme9n1 00:19:19.161 Could not set queue depth (nvme0n1) 00:19:19.161 Could not set queue depth (nvme10n1) 00:19:19.161 Could not set queue depth (nvme1n1) 00:19:19.161 Could not set queue depth (nvme2n1) 00:19:19.161 Could not set queue depth (nvme3n1) 00:19:19.161 Could not set queue depth (nvme4n1) 00:19:19.161 Could not set queue depth (nvme5n1) 00:19:19.161 Could not set queue depth (nvme6n1) 00:19:19.161 Could not set queue depth (nvme7n1) 00:19:19.161 Could not set queue depth (nvme8n1) 00:19:19.161 Could not set queue depth (nvme9n1) 00:19:19.161 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.161 fio-3.35 00:19:19.161 Starting 11 threads 00:19:29.141 00:19:29.141 job0: (groupid=0, jobs=1): err= 0: pid=80739: Fri Nov 8 04:03:03 2024 00:19:29.141 write: IOPS=603, BW=151MiB/s (158MB/s)(1523MiB/10087msec); 0 zone resets 00:19:29.141 slat (usec): min=19, max=17420, avg=1636.34, stdev=2857.02 00:19:29.141 clat (msec): min=20, max=175, avg=104.30, stdev=20.97 00:19:29.141 lat (msec): min=20, max=175, avg=105.94, stdev=21.11 00:19:29.141 clat percentiles (msec): 00:19:29.141 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:19:29.141 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 112], 00:19:29.141 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 133], 00:19:29.141 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 163], 99.95th=[ 169], 00:19:29.141 | 99.99th=[ 176] 00:19:29.141 bw ( KiB/s): 
min=121074, max=188928, per=13.70%, avg=154332.50, stdev=29359.27, samples=20 00:19:29.141 iops : min= 472, max= 738, avg=602.70, stdev=114.75, samples=20 00:19:29.141 lat (msec) : 50=0.26%, 100=58.60%, 250=41.14% 00:19:29.141 cpu : usr=1.09%, sys=1.78%, ctx=5711, majf=0, minf=1 00:19:29.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:29.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.141 issued rwts: total=0,6092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.141 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.141 job1: (groupid=0, jobs=1): err= 0: pid=80740: Fri Nov 8 04:03:03 2024 00:19:29.141 write: IOPS=368, BW=92.1MiB/s (96.6MB/s)(944MiB/10245msec); 0 zone resets 00:19:29.141 slat (usec): min=18, max=69058, avg=2627.35, stdev=5724.36 00:19:29.141 clat (msec): min=21, max=570, avg=171.02, stdev=104.35 00:19:29.141 lat (msec): min=21, max=570, avg=173.65, stdev=105.78 00:19:29.141 clat percentiles (msec): 00:19:29.141 | 1.00th=[ 75], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:19:29.141 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 133], 00:19:29.141 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 330], 00:19:29.141 | 99.00th=[ 355], 99.50th=[ 464], 99.90th=[ 550], 99.95th=[ 567], 00:19:29.141 | 99.99th=[ 567] 00:19:29.141 bw ( KiB/s): min=47616, max=182419, per=8.43%, avg=94939.00, stdev=55992.06, samples=20 00:19:29.141 iops : min= 186, max= 712, avg=370.70, stdev=218.63, samples=20 00:19:29.141 lat (msec) : 50=0.64%, 100=49.42%, 250=15.90%, 500=33.68%, 750=0.37% 00:19:29.141 cpu : usr=0.90%, sys=1.06%, ctx=4457, majf=0, minf=1 00:19:29.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:19:29.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.141 issued rwts: total=0,3774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.141 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.141 job2: (groupid=0, jobs=1): err= 0: pid=80753: Fri Nov 8 04:03:03 2024 00:19:29.141 write: IOPS=603, BW=151MiB/s (158MB/s)(1522MiB/10087msec); 0 zone resets 00:19:29.141 slat (usec): min=20, max=25125, avg=1637.21, stdev=2852.59 00:19:29.141 clat (msec): min=23, max=175, avg=104.37, stdev=21.06 00:19:29.141 lat (msec): min=23, max=175, avg=106.00, stdev=21.23 00:19:29.141 clat percentiles (msec): 00:19:29.141 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:19:29.141 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 111], 00:19:29.141 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 133], 00:19:29.141 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 169], 00:19:29.141 | 99.99th=[ 176] 00:19:29.141 bw ( KiB/s): min=118784, max=189952, per=13.68%, avg=154178.65, stdev=29509.10, samples=20 00:19:29.141 iops : min= 464, max= 742, avg=602.20, stdev=115.22, samples=20 00:19:29.141 lat (msec) : 50=0.26%, 100=58.71%, 250=41.03% 00:19:29.141 cpu : usr=1.76%, sys=1.74%, ctx=7421, majf=0, minf=1 00:19:29.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:29.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.141 issued rwts: total=0,6088,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:29.141 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.141 job3: (groupid=0, jobs=1): err= 0: pid=80757: Fri Nov 8 04:03:03 2024 00:19:29.141 write: IOPS=228, BW=57.0MiB/s (59.8MB/s)(585MiB/10250msec); 0 zone resets 00:19:29.141 slat (usec): min=21, max=90802, avg=4277.93, stdev=8423.86 00:19:29.141 clat (msec): min=7, max=575, avg=276.03, stdev=62.99 00:19:29.141 lat (msec): min=7, max=575, avg=280.31, stdev=63.40 00:19:29.141 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 47], 5.00th=[ 192], 10.00th=[ 211], 20.00th=[ 224], 00:19:29.142 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 284], 60.00th=[ 305], 00:19:29.142 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 351], 00:19:29.142 | 99.00th=[ 409], 99.50th=[ 514], 99.90th=[ 558], 99.95th=[ 575], 00:19:29.142 | 99.99th=[ 575] 00:19:29.142 bw ( KiB/s): min=45056, max=80896, per=5.17%, avg=58210.25, stdev=10426.76, samples=20 00:19:29.142 iops : min= 176, max= 316, avg=227.30, stdev=40.78, samples=20 00:19:29.142 lat (msec) : 10=0.04%, 50=1.24%, 100=1.37%, 250=26.95%, 500=69.80% 00:19:29.142 lat (msec) : 750=0.60% 00:19:29.142 cpu : usr=0.46%, sys=0.78%, ctx=2485, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,2338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job4: (groupid=0, jobs=1): err= 0: pid=80758: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=228, BW=57.1MiB/s (59.9MB/s)(585MiB/10239msec); 0 zone resets 00:19:29.142 slat (usec): min=19, max=87109, avg=4273.49, stdev=8159.32 00:19:29.142 clat (msec): min=89, max=519, avg=275.61, stdev=47.89 00:19:29.142 lat (msec): min=89, max=519, avg=279.88, stdev=47.90 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 140], 5.00th=[ 199], 10.00th=[ 211], 20.00th=[ 230], 00:19:29.142 | 30.00th=[ 259], 40.00th=[ 271], 50.00th=[ 288], 60.00th=[ 300], 00:19:29.142 | 70.00th=[ 305], 80.00th=[ 317], 90.00th=[ 321], 95.00th=[ 321], 00:19:29.142 | 99.00th=[ 397], 99.50th=[ 456], 99.90th=[ 498], 99.95th=[ 518], 00:19:29.142 | 99.99th=[ 518] 00:19:29.142 bw ( KiB/s): min=51097, max=75776, per=5.17%, avg=58256.25, stdev=8059.02, samples=20 00:19:29.142 iops : min= 199, max= 296, avg=227.40, stdev=31.51, samples=20 00:19:29.142 lat (msec) : 100=0.17%, 250=23.46%, 500=76.28%, 750=0.09% 00:19:29.142 cpu : usr=0.57%, sys=0.78%, ctx=2479, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,2340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job5: (groupid=0, jobs=1): err= 0: pid=80759: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=228, BW=57.0MiB/s (59.8MB/s)(584MiB/10247msec); 0 zone resets 00:19:29.142 slat (usec): min=23, max=57927, avg=4201.52, stdev=8295.90 00:19:29.142 clat (msec): min=2, max=551, avg=276.24, stdev=72.43 00:19:29.142 lat (msec): min=2, max=551, avg=280.44, stdev=73.03 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 6], 5.00th=[ 188], 10.00th=[ 209], 20.00th=[ 224], 
00:19:29.142 | 30.00th=[ 253], 40.00th=[ 264], 50.00th=[ 271], 60.00th=[ 300], 00:19:29.142 | 70.00th=[ 326], 80.00th=[ 342], 90.00th=[ 355], 95.00th=[ 359], 00:19:29.142 | 99.00th=[ 409], 99.50th=[ 472], 99.90th=[ 514], 99.95th=[ 550], 00:19:29.142 | 99.99th=[ 550] 00:19:29.142 bw ( KiB/s): min=47009, max=83968, per=5.17%, avg=58198.35, stdev=11173.75, samples=20 00:19:29.142 iops : min= 183, max= 328, avg=227.20, stdev=43.66, samples=20 00:19:29.142 lat (msec) : 4=0.47%, 10=2.01%, 20=0.13%, 50=0.51%, 250=24.69% 00:19:29.142 lat (msec) : 500=71.93%, 750=0.26% 00:19:29.142 cpu : usr=0.54%, sys=0.89%, ctx=2412, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,2337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job6: (groupid=0, jobs=1): err= 0: pid=80760: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=260, BW=65.2MiB/s (68.3MB/s)(657MiB/10084msec); 0 zone resets 00:19:29.142 slat (usec): min=19, max=55676, avg=3685.22, stdev=7549.36 00:19:29.142 clat (msec): min=3, max=340, avg=241.71, stdev=84.85 00:19:29.142 lat (msec): min=3, max=340, avg=245.40, stdev=86.05 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 13], 5.00th=[ 60], 10.00th=[ 89], 20.00th=[ 197], 00:19:29.142 | 30.00th=[ 220], 40.00th=[ 251], 50.00th=[ 264], 60.00th=[ 284], 00:19:29.142 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 321], 95.00th=[ 330], 00:19:29.142 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:19:29.142 | 99.99th=[ 342] 00:19:29.142 bw ( KiB/s): min=49152, max=185344, per=5.83%, avg=65634.80, stdev=29552.81, samples=20 00:19:29.142 iops : min= 192, max= 724, avg=256.15, stdev=115.49, samples=20 00:19:29.142 lat (msec) : 4=0.04%, 10=0.61%, 20=1.18%, 50=1.75%, 100=9.78% 00:19:29.142 lat (msec) : 250=26.21%, 500=60.44% 00:19:29.142 cpu : usr=0.71%, sys=0.78%, ctx=2966, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,2629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job7: (groupid=0, jobs=1): err= 0: pid=80761: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=1101, BW=275MiB/s (289MB/s)(2777MiB/10087msec); 0 zone resets 00:19:29.142 slat (usec): min=17, max=13621, avg=870.78, stdev=1663.84 00:19:29.142 clat (usec): min=792, max=232441, avg=57216.13, stdev=25822.24 00:19:29.142 lat (usec): min=863, max=232507, avg=58086.91, stdev=26147.72 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 43], 00:19:29.142 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:19:29.142 | 70.00th=[ 47], 80.00th=[ 86], 90.00th=[ 91], 95.00th=[ 122], 00:19:29.142 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 213], 99.95th=[ 220], 00:19:29.142 | 99.99th=[ 228] 00:19:29.142 bw ( KiB/s): min=123392, max=375808, per=25.08%, avg=282623.10, stdev=99388.93, samples=20 00:19:29.142 iops : min= 482, max= 1468, avg=1103.95, stdev=388.27, samples=20 00:19:29.142 lat (usec) : 1000=0.05% 00:19:29.142 
lat (msec) : 2=0.04%, 10=0.20%, 20=0.24%, 50=73.50%, 100=20.08% 00:19:29.142 lat (msec) : 250=5.89% 00:19:29.142 cpu : usr=1.57%, sys=3.02%, ctx=14448, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,11106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job8: (groupid=0, jobs=1): err= 0: pid=80762: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=334, BW=83.7MiB/s (87.8MB/s)(858MiB/10254msec); 0 zone resets 00:19:29.142 slat (usec): min=20, max=57547, avg=2877.67, stdev=5949.99 00:19:29.142 clat (msec): min=3, max=563, avg=188.17, stdev=94.80 00:19:29.142 lat (msec): min=4, max=563, avg=191.05, stdev=96.04 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 80], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 127], 00:19:29.142 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 132], 60.00th=[ 133], 00:19:29.142 | 70.00th=[ 222], 80.00th=[ 317], 90.00th=[ 338], 95.00th=[ 342], 00:19:29.142 | 99.00th=[ 380], 99.50th=[ 481], 99.90th=[ 542], 99.95th=[ 567], 00:19:29.142 | 99.99th=[ 567] 00:19:29.142 bw ( KiB/s): min=45477, max=127488, per=7.65%, avg=86237.80, stdev=37870.73, samples=20 00:19:29.142 iops : min= 177, max= 498, avg=336.80, stdev=148.00, samples=20 00:19:29.142 lat (msec) : 4=0.03%, 10=0.15%, 50=0.47%, 100=0.70%, 250=69.06% 00:19:29.142 lat (msec) : 500=29.19%, 750=0.41% 00:19:29.142 cpu : usr=0.66%, sys=1.02%, ctx=4327, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,3433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.142 job9: (groupid=0, jobs=1): err= 0: pid=80763: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=219, BW=54.9MiB/s (57.6MB/s)(563MiB/10239msec); 0 zone resets 00:19:29.142 slat (usec): min=20, max=81466, avg=4439.81, stdev=8720.98 00:19:29.142 clat (msec): min=84, max=594, avg=286.64, stdev=59.51 00:19:29.142 lat (msec): min=84, max=594, avg=291.08, stdev=59.66 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 133], 5.00th=[ 197], 10.00th=[ 209], 20.00th=[ 232], 00:19:29.142 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 309], 00:19:29.142 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 355], 00:19:29.142 | 99.00th=[ 468], 99.50th=[ 531], 99.90th=[ 575], 99.95th=[ 592], 00:19:29.142 | 99.99th=[ 592] 00:19:29.142 bw ( KiB/s): min=45056, max=80384, per=4.96%, avg=55942.30, stdev=9612.92, samples=20 00:19:29.142 iops : min= 176, max= 314, avg=218.25, stdev=37.51, samples=20 00:19:29.142 lat (msec) : 100=0.22%, 250=22.71%, 500=76.27%, 750=0.80% 00:19:29.142 cpu : usr=0.71%, sys=0.61%, ctx=1644, majf=0, minf=1 00:19:29.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:29.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.142 issued rwts: total=0,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.142 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:19:29.142 job10: (groupid=0, jobs=1): err= 0: pid=80764: Fri Nov 8 04:03:03 2024 00:19:29.142 write: IOPS=267, BW=67.0MiB/s (70.2MB/s)(686MiB/10243msec); 0 zone resets 00:19:29.142 slat (usec): min=19, max=60481, avg=3601.41, stdev=6747.20 00:19:29.142 clat (msec): min=62, max=526, avg=235.18, stdev=71.09 00:19:29.142 lat (msec): min=62, max=526, avg=238.78, stdev=71.94 00:19:29.142 clat percentiles (msec): 00:19:29.142 | 1.00th=[ 103], 5.00th=[ 125], 10.00th=[ 132], 20.00th=[ 142], 00:19:29.142 | 30.00th=[ 205], 40.00th=[ 215], 50.00th=[ 228], 60.00th=[ 275], 00:19:29.142 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 317], 00:19:29.142 | 99.00th=[ 384], 99.50th=[ 460], 99.90th=[ 506], 99.95th=[ 527], 00:19:29.143 | 99.99th=[ 527] 00:19:29.143 bw ( KiB/s): min=51097, max=122880, per=6.09%, avg=68590.00, stdev=20848.12, samples=20 00:19:29.143 iops : min= 199, max= 480, avg=267.70, stdev=81.41, samples=20 00:19:29.143 lat (msec) : 100=0.91%, 250=53.24%, 500=45.63%, 750=0.22% 00:19:29.143 cpu : usr=0.65%, sys=0.80%, ctx=2590, majf=0, minf=1 00:19:29.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:29.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.143 issued rwts: total=0,2744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.143 00:19:29.143 Run status group 0 (all jobs): 00:19:29.143 WRITE: bw=1100MiB/s (1154MB/s), 54.9MiB/s-275MiB/s (57.6MB/s-289MB/s), io=11.0GiB (11.8GB), run=10084-10254msec 00:19:29.143 00:19:29.143 Disk stats (read/write): 00:19:29.143 nvme0n1: ios=49/12015, merge=0/0, ticks=46/1213398, in_queue=1213444, util=97.72% 00:19:29.143 nvme10n1: ios=49/7530, merge=0/0, ticks=41/1235569, in_queue=1235610, util=97.85% 00:19:29.143 nvme1n1: ios=33/12005, merge=0/0, ticks=28/1212975, in_queue=1213003, util=97.90% 00:19:29.143 nvme2n1: ios=18/4658, merge=0/0, ticks=138/1233340, in_queue=1233478, util=98.29% 00:19:29.143 nvme3n1: ios=8/4656, merge=0/0, ticks=8/1232638, in_queue=1232646, util=97.86% 00:19:29.143 nvme4n1: ios=0/4656, merge=0/0, ticks=0/1233659, in_queue=1233659, util=98.20% 00:19:29.143 nvme5n1: ios=0/5083, merge=0/0, ticks=0/1212923, in_queue=1212923, util=98.24% 00:19:29.143 nvme6n1: ios=0/22048, merge=0/0, ticks=0/1215676, in_queue=1215676, util=98.43% 00:19:29.143 nvme7n1: ios=0/6849, merge=0/0, ticks=0/1236238, in_queue=1236238, util=98.75% 00:19:29.143 nvme8n1: ios=0/4484, merge=0/0, ticks=0/1232645, in_queue=1232645, util=98.75% 00:19:29.143 nvme9n1: ios=0/5466, merge=0/0, ticks=0/1235895, in_queue=1235895, util=98.83% 00:19:29.143 04:03:03 -- target/multiconnection.sh@36 -- # sync 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.143 04:03:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:29.143 04:03:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # lsblk -l 
-o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:29.143 04:03:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.143 04:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:29.143 04:03:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:29.143 04:03:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:29.143 04:03:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:29.143 04:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:29.143 04:03:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:29.143 04:03:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:29.143 04:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:29.143 04:03:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:29.143 04:03:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 
00:19:29.143 04:03:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:29.143 04:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:29.143 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:29.143 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:29.143 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:29.143 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:29.143 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:29.143 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:29.143 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:29.143 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:29.143 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:29.143 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:29.143 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:29.143 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.143 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.143 04:03:04 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:29.143 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.143 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.143 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.143 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:29.403 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:29.403 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:29.403 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.403 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.403 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:29.403 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:29.403 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.403 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.403 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:29.403 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.403 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.403 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.403 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.403 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:29.403 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:29.403 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:29.403 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.403 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.403 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:29.403 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:29.403 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.403 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.403 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:29.403 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.403 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.403 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.403 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.403 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:29.662 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:29.662 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:29.662 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.662 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.662 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:29.662 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:29.662 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.662 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.662 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
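The trace above is one iteration of a fixed teardown pattern that multiconnection.sh applies to all eleven subsystems: disconnect the kernel initiator from the cnode, poll lsblk until no block device reports the matching SPDKn serial, then remove the subsystem on the target over JSON-RPC. A minimal standalone sketch of that loop, assuming nvme-cli and lsblk are present and that rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py on the default socket (that equivalence is an assumption, not shown in the log):

    for i in $(seq 1 11); do
        # Drop the initiator-side controller for this subsystem.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Wait until the namespace serial (SPDK1..SPDK11) disappears from lsblk.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        # Delete the subsystem on the target side.
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done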
00:19:29.662 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.662 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.662 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.662 04:03:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.662 04:03:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:29.662 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:29.662 04:03:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:29.662 04:03:04 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.662 04:03:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.662 04:03:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:29.662 04:03:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:29.662 04:03:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.662 04:03:04 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.662 04:03:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:29.662 04:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.662 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:29.662 04:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.662 04:03:04 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:29.662 04:03:04 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:29.662 04:03:04 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:29.662 04:03:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.662 04:03:04 -- nvmf/common.sh@116 -- # sync 00:19:29.662 04:03:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.662 04:03:04 -- nvmf/common.sh@119 -- # set +e 00:19:29.662 04:03:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.662 04:03:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.662 rmmod nvme_tcp 00:19:29.662 rmmod nvme_fabrics 00:19:29.662 rmmod nvme_keyring 00:19:29.662 04:03:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:29.662 04:03:04 -- nvmf/common.sh@123 -- # set -e 00:19:29.662 04:03:04 -- nvmf/common.sh@124 -- # return 0 00:19:29.662 04:03:04 -- nvmf/common.sh@477 -- # '[' -n 80050 ']' 00:19:29.662 04:03:04 -- nvmf/common.sh@478 -- # killprocess 80050 00:19:29.662 04:03:04 -- common/autotest_common.sh@936 -- # '[' -z 80050 ']' 00:19:29.662 04:03:04 -- common/autotest_common.sh@940 -- # kill -0 80050 00:19:29.662 04:03:04 -- common/autotest_common.sh@941 -- # uname 00:19:29.662 04:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.662 04:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80050 00:19:29.662 killing process with pid 80050 00:19:29.662 04:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:29.662 04:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:29.662 04:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80050' 00:19:29.662 04:03:04 -- common/autotest_common.sh@955 -- # kill 80050 00:19:29.662 04:03:04 -- common/autotest_common.sh@960 -- # wait 80050 00:19:30.229 04:03:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:30.229 04:03:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:30.229 04:03:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:30.229 04:03:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:19:30.229 04:03:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:30.229 04:03:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.229 04:03:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.229 04:03:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.229 04:03:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:30.229 00:19:30.229 real 0m50.363s 00:19:30.229 user 2m56.332s 00:19:30.229 sys 0m20.268s 00:19:30.229 04:03:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:30.229 04:03:05 -- common/autotest_common.sh@10 -- # set +x 00:19:30.229 ************************************ 00:19:30.229 END TEST nvmf_multiconnection 00:19:30.229 ************************************ 00:19:30.488 04:03:05 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:30.488 04:03:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.488 04:03:05 -- common/autotest_common.sh@10 -- # set +x 00:19:30.488 ************************************ 00:19:30.488 START TEST nvmf_initiator_timeout 00:19:30.488 ************************************ 00:19:30.488 04:03:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:30.488 * Looking for test storage... 00:19:30.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.488 04:03:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:30.488 04:03:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:30.488 04:03:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:30.488 04:03:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:30.488 04:03:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:30.488 04:03:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:30.488 04:03:05 -- scripts/common.sh@335 -- # IFS=.-: 00:19:30.488 04:03:05 -- scripts/common.sh@335 -- # read -ra ver1 00:19:30.488 04:03:05 -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.488 04:03:05 -- scripts/common.sh@336 -- # read -ra ver2 00:19:30.488 04:03:05 -- scripts/common.sh@337 -- # local 'op=<' 00:19:30.488 04:03:05 -- scripts/common.sh@339 -- # ver1_l=2 00:19:30.488 04:03:05 -- scripts/common.sh@340 -- # ver2_l=1 00:19:30.488 04:03:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:30.488 04:03:05 -- scripts/common.sh@343 -- # case "$op" in 00:19:30.488 04:03:05 -- scripts/common.sh@344 -- # : 1 00:19:30.488 04:03:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:30.488 04:03:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.488 04:03:05 -- scripts/common.sh@364 -- # decimal 1 00:19:30.488 04:03:05 -- scripts/common.sh@352 -- # local d=1 00:19:30.488 04:03:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.488 04:03:05 -- scripts/common.sh@354 -- # echo 1 00:19:30.488 04:03:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:30.488 04:03:05 -- scripts/common.sh@365 -- # decimal 2 00:19:30.488 04:03:05 -- scripts/common.sh@352 -- # local d=2 00:19:30.488 04:03:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.488 04:03:05 -- scripts/common.sh@354 -- # echo 2 00:19:30.488 04:03:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:30.488 04:03:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:30.488 04:03:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:30.488 04:03:05 -- scripts/common.sh@367 -- # return 0 00:19:30.488 04:03:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.488 --rc genhtml_branch_coverage=1 00:19:30.488 --rc genhtml_function_coverage=1 00:19:30.488 --rc genhtml_legend=1 00:19:30.488 --rc geninfo_all_blocks=1 00:19:30.488 --rc geninfo_unexecuted_blocks=1 00:19:30.488 00:19:30.488 ' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.488 --rc genhtml_branch_coverage=1 00:19:30.488 --rc genhtml_function_coverage=1 00:19:30.488 --rc genhtml_legend=1 00:19:30.488 --rc geninfo_all_blocks=1 00:19:30.488 --rc geninfo_unexecuted_blocks=1 00:19:30.488 00:19:30.488 ' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.488 --rc genhtml_branch_coverage=1 00:19:30.488 --rc genhtml_function_coverage=1 00:19:30.488 --rc genhtml_legend=1 00:19:30.488 --rc geninfo_all_blocks=1 00:19:30.488 --rc geninfo_unexecuted_blocks=1 00:19:30.488 00:19:30.488 ' 00:19:30.488 04:03:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.488 --rc genhtml_branch_coverage=1 00:19:30.488 --rc genhtml_function_coverage=1 00:19:30.488 --rc genhtml_legend=1 00:19:30.488 --rc geninfo_all_blocks=1 00:19:30.488 --rc geninfo_unexecuted_blocks=1 00:19:30.488 00:19:30.488 ' 00:19:30.488 04:03:05 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.488 04:03:05 -- nvmf/common.sh@7 -- # uname -s 00:19:30.488 04:03:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.488 04:03:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.488 04:03:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.488 04:03:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.488 04:03:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.488 04:03:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.488 04:03:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.488 04:03:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.488 04:03:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.488 04:03:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
00:19:30.488 04:03:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:19:30.488 04:03:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.488 04:03:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.488 04:03:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.488 04:03:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.488 04:03:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.488 04:03:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.488 04:03:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.488 04:03:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.488 04:03:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.488 04:03:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.488 04:03:05 -- paths/export.sh@5 -- # export PATH 00:19:30.488 04:03:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.488 04:03:05 -- nvmf/common.sh@46 -- # : 0 00:19:30.488 04:03:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.488 04:03:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.488 04:03:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.488 04:03:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.488 04:03:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.488 04:03:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:30.488 04:03:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.488 04:03:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.488 04:03:05 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.488 04:03:05 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.488 04:03:05 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:30.488 04:03:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.488 04:03:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.488 04:03:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.488 04:03:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.488 04:03:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.488 04:03:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.488 04:03:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.488 04:03:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.488 04:03:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:30.488 04:03:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:30.488 04:03:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.488 04:03:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.488 04:03:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.488 04:03:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:30.488 04:03:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.488 04:03:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.488 04:03:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.488 04:03:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.488 04:03:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.488 04:03:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.488 04:03:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.488 04:03:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.488 04:03:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:30.747 04:03:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:30.747 Cannot find device "nvmf_tgt_br" 00:19:30.747 04:03:05 -- nvmf/common.sh@154 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.747 Cannot find device "nvmf_tgt_br2" 00:19:30.747 04:03:05 -- nvmf/common.sh@155 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:30.747 04:03:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:30.747 Cannot find device "nvmf_tgt_br" 00:19:30.747 04:03:05 -- nvmf/common.sh@157 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:30.747 Cannot find device "nvmf_tgt_br2" 00:19:30.747 04:03:05 -- nvmf/common.sh@158 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:30.747 04:03:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:30.747 04:03:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:30.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.747 04:03:05 -- nvmf/common.sh@161 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.747 04:03:05 -- nvmf/common.sh@162 -- # true 00:19:30.747 04:03:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.747 04:03:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.747 04:03:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.747 04:03:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.747 04:03:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.747 04:03:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.747 04:03:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.747 04:03:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.747 04:03:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.747 04:03:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:30.747 04:03:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:30.747 04:03:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:30.747 04:03:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:30.747 04:03:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.747 04:03:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.747 04:03:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.747 04:03:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:30.747 04:03:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:30.747 04:03:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.747 04:03:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.006 04:03:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.006 04:03:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.006 04:03:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.006 04:03:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:31.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:31.006 00:19:31.006 --- 10.0.0.2 ping statistics --- 00:19:31.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.006 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:31.006 04:03:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:31.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.006 00:19:31.006 --- 10.0.0.3 ping statistics --- 00:19:31.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.006 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.006 04:03:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:31.006 00:19:31.006 --- 10.0.0.1 ping statistics --- 00:19:31.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.006 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:31.006 04:03:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.006 04:03:05 -- nvmf/common.sh@421 -- # return 0 00:19:31.006 04:03:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:31.006 04:03:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.006 04:03:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:31.006 04:03:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:31.006 04:03:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.006 04:03:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:31.006 04:03:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:31.006 04:03:05 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:31.006 04:03:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:31.006 04:03:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.006 04:03:05 -- common/autotest_common.sh@10 -- # set +x 00:19:31.006 04:03:05 -- nvmf/common.sh@469 -- # nvmfpid=81139 00:19:31.006 04:03:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:31.006 04:03:05 -- nvmf/common.sh@470 -- # waitforlisten 81139 00:19:31.006 04:03:05 -- common/autotest_common.sh@829 -- # '[' -z 81139 ']' 00:19:31.006 04:03:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.006 04:03:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.006 04:03:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.006 04:03:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.006 04:03:05 -- common/autotest_common.sh@10 -- # set +x 00:19:31.006 [2024-11-08 04:03:05.989932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.006 [2024-11-08 04:03:05.990032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.266 [2024-11-08 04:03:06.134056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.266 [2024-11-08 04:03:06.241883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.266 [2024-11-08 04:03:06.242067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.266 [2024-11-08 04:03:06.242084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.266 [2024-11-08 04:03:06.242096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
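At this point the harness has brought up the veth/bridge test topology (all three pings above returned 0% loss: 10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) and launched nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of the bring-up commands visible in the trace, assuming iproute2; the second target interface, the link-up steps, and the iptables rules are elided here:

    ip netns add nvmf_tgt_ns_spdk
    # Two veth pairs: the initiator end stays on the host, the target end moves into the netns.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # A bridge joins the host-side peers so initiator and target share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br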
00:19:31.266 [2024-11-08 04:03:06.242507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.266 [2024-11-08 04:03:06.242602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.266 [2024-11-08 04:03:06.242754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.266 [2024-11-08 04:03:06.242746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.203 04:03:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.203 04:03:07 -- common/autotest_common.sh@862 -- # return 0 00:19:32.203 04:03:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:32.203 04:03:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 04:03:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 Malloc0 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 Delay0 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 [2024-11-08 04:03:07.122942] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.203 04:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.203 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:19:32.203 [2024-11-08 04:03:07.155196] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.203 04:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.203 04:03:07 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:32.462 04:03:07 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:32.462 04:03:07 -- common/autotest_common.sh@1187 -- # local i=0 00:19:32.462 04:03:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:32.462 04:03:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:32.462 04:03:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:34.365 04:03:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:34.365 04:03:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:34.365 04:03:09 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:34.365 04:03:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:34.365 04:03:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:34.365 04:03:09 -- common/autotest_common.sh@1197 -- # return 0 00:19:34.365 04:03:09 -- target/initiator_timeout.sh@35 -- # fio_pid=81218 00:19:34.365 04:03:09 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:34.365 04:03:09 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:34.365 [global] 00:19:34.365 thread=1 00:19:34.365 invalidate=1 00:19:34.365 rw=write 00:19:34.365 time_based=1 00:19:34.365 runtime=60 00:19:34.365 ioengine=libaio 00:19:34.365 direct=1 00:19:34.365 bs=4096 00:19:34.365 iodepth=1 00:19:34.365 norandommap=0 00:19:34.365 numjobs=1 00:19:34.365 00:19:34.365 verify_dump=1 00:19:34.365 verify_backlog=512 00:19:34.365 verify_state_save=0 00:19:34.365 do_verify=1 00:19:34.365 verify=crc32c-intel 00:19:34.365 [job0] 00:19:34.365 filename=/dev/nvme0n1 00:19:34.365 Could not set queue depth (nvme0n1) 00:19:34.624 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.624 fio-3.35 00:19:34.624 Starting 1 thread 00:19:37.908 04:03:12 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:37.908 04:03:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.908 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:19:37.908 true 00:19:37.908 04:03:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.908 04:03:12 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:37.908 04:03:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.908 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:19:37.908 true 00:19:37.908 04:03:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.908 04:03:12 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:37.908 04:03:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.908 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:19:37.908 true 00:19:37.908 04:03:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.908 04:03:12 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:37.908 04:03:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.908 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:19:37.908 true 00:19:37.908 04:03:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.908 04:03:12 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:40.441 04:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.441 04:03:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.441 true 00:19:40.441 04:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:40.441 04:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.441 04:03:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.441 true 00:19:40.441 04:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:40.441 04:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.441 04:03:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.441 true 00:19:40.441 04:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:40.441 04:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.441 04:03:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.441 true 00:19:40.441 04:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:40.441 04:03:15 -- target/initiator_timeout.sh@54 -- # wait 81218 00:20:36.703 00:20:36.703 job0: (groupid=0, jobs=1): err= 0: pid=81242: Fri Nov 8 04:04:09 2024 00:20:36.703 read: IOPS=853, BW=3413KiB/s (3495kB/s)(200MiB/60000msec) 00:20:36.703 slat (nsec): min=12429, max=89589, avg=14125.40, stdev=3175.65 00:20:36.703 clat (usec): min=149, max=643, avg=189.76, stdev=16.99 00:20:36.703 lat (usec): min=163, max=701, avg=203.88, stdev=17.49 00:20:36.703 clat percentiles (usec): 00:20:36.703 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:20:36.703 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:20:36.703 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 221], 00:20:36.704 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 302], 00:20:36.704 | 99.99th=[ 515] 00:20:36.704 write: IOPS=861, BW=3444KiB/s (3527kB/s)(202MiB/60000msec); 0 zone resets 00:20:36.704 slat (usec): min=18, max=13505, avg=21.82, stdev=68.77 00:20:36.704 clat (usec): min=117, max=40589k, avg=934.63, stdev=178570.65 00:20:36.704 lat (usec): min=138, max=40589k, avg=956.45, stdev=178570.74 00:20:36.704 clat percentiles (usec): 00:20:36.704 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:20:36.704 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:20:36.704 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:20:36.704 | 99.00th=[ 198], 99.50th=[ 208], 99.90th=[ 237], 99.95th=[ 269], 00:20:36.704 | 99.99th=[ 750] 00:20:36.704 bw ( KiB/s): min= 3152, max=12288, per=100.00%, avg=10292.51, stdev=1908.19, samples=39 00:20:36.704 iops : min= 788, max= 3072, avg=2573.13, stdev=477.05, samples=39 00:20:36.704 lat (usec) : 250=99.67%, 500=0.31%, 750=0.01%, 1000=0.01% 00:20:36.704 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:36.704 cpu : usr=0.53%, sys=2.19%, ctx=102925, majf=0, minf=5 00:20:36.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:36.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.704 issued rwts: total=51200,51664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:36.704 00:20:36.704 Run status group 0 (all jobs): 00:20:36.704 READ: bw=3413KiB/s (3495kB/s), 3413KiB/s-3413KiB/s (3495kB/s-3495kB/s), io=200MiB (210MB), run=60000-60000msec 00:20:36.704 WRITE: bw=3444KiB/s (3527kB/s), 3444KiB/s-3444KiB/s (3527kB/s-3527kB/s), io=202MiB (212MB), run=60000-60000msec 00:20:36.704 00:20:36.704 Disk stats (read/write): 00:20:36.704 nvme0n1: ios=51368/51200, merge=0/0, ticks=10353/8227, in_queue=18580, util=99.56% 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:36.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:36.704 04:04:09 -- common/autotest_common.sh@1208 -- # local i=0 00:20:36.704 04:04:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:36.704 04:04:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.704 04:04:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.704 04:04:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:36.704 04:04:09 -- common/autotest_common.sh@1220 -- # return 0 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:36.704 nvmf hotplug test: fio successful as expected 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.704 04:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.704 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 04:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:36.704 04:04:09 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:36.704 04:04:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:36.704 04:04:09 -- nvmf/common.sh@116 -- # sync 00:20:36.704 04:04:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:36.704 04:04:09 -- nvmf/common.sh@119 -- # set +e 00:20:36.704 04:04:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:36.704 04:04:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:36.704 rmmod nvme_tcp 00:20:36.704 rmmod nvme_fabrics 00:20:36.704 rmmod nvme_keyring 00:20:36.704 04:04:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:36.704 04:04:09 -- nvmf/common.sh@123 -- # set -e 00:20:36.704 04:04:09 -- nvmf/common.sh@124 -- # return 0 00:20:36.704 04:04:09 -- nvmf/common.sh@477 -- # '[' -n 81139 ']' 00:20:36.704 04:04:09 -- nvmf/common.sh@478 -- # killprocess 81139 00:20:36.704 04:04:09 -- common/autotest_common.sh@936 -- # '[' -z 81139 ']' 00:20:36.704 04:04:09 -- common/autotest_common.sh@940 -- # kill -0 81139 00:20:36.704 04:04:09 -- common/autotest_common.sh@941 -- # uname 00:20:36.704 04:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.704 04:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81139 00:20:36.704 04:04:09 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:36.704 04:04:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:36.704 killing process with pid 81139 00:20:36.704 04:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81139' 00:20:36.704 04:04:09 -- common/autotest_common.sh@955 -- # kill 81139 00:20:36.704 04:04:09 -- common/autotest_common.sh@960 -- # wait 81139 00:20:36.704 04:04:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:36.704 04:04:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:36.704 04:04:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:36.704 04:04:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.704 04:04:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:36.704 04:04:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.704 04:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.704 04:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.704 04:04:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:36.704 ************************************ 00:20:36.704 END TEST nvmf_initiator_timeout 00:20:36.704 ************************************ 00:20:36.704 00:20:36.704 real 1m4.860s 00:20:36.704 user 4m8.023s 00:20:36.704 sys 0m7.614s 00:20:36.704 04:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:36.704 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 04:04:10 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:36.704 04:04:10 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:36.704 04:04:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.704 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 04:04:10 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:36.704 04:04:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.704 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 04:04:10 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:36.704 04:04:10 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.704 04:04:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:36.704 04:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.704 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 ************************************ 00:20:36.704 START TEST nvmf_multicontroller 00:20:36.704 ************************************ 00:20:36.704 04:04:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.704 * Looking for test storage... 
00:20:36.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:36.704 04:04:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:36.704 04:04:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:36.704 04:04:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:36.704 04:04:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:36.704 04:04:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:36.704 04:04:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:36.704 04:04:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:36.704 04:04:10 -- scripts/common.sh@335 -- # IFS=.-: 00:20:36.704 04:04:10 -- scripts/common.sh@335 -- # read -ra ver1 00:20:36.704 04:04:10 -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.704 04:04:10 -- scripts/common.sh@336 -- # read -ra ver2 00:20:36.704 04:04:10 -- scripts/common.sh@337 -- # local 'op=<' 00:20:36.704 04:04:10 -- scripts/common.sh@339 -- # ver1_l=2 00:20:36.704 04:04:10 -- scripts/common.sh@340 -- # ver2_l=1 00:20:36.704 04:04:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:36.704 04:04:10 -- scripts/common.sh@343 -- # case "$op" in 00:20:36.704 04:04:10 -- scripts/common.sh@344 -- # : 1 00:20:36.704 04:04:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:36.704 04:04:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.704 04:04:10 -- scripts/common.sh@364 -- # decimal 1 00:20:36.704 04:04:10 -- scripts/common.sh@352 -- # local d=1 00:20:36.704 04:04:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.704 04:04:10 -- scripts/common.sh@354 -- # echo 1 00:20:36.704 04:04:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:36.704 04:04:10 -- scripts/common.sh@365 -- # decimal 2 00:20:36.704 04:04:10 -- scripts/common.sh@352 -- # local d=2 00:20:36.704 04:04:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.704 04:04:10 -- scripts/common.sh@354 -- # echo 2 00:20:36.704 04:04:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:36.704 04:04:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:36.704 04:04:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:36.704 04:04:10 -- scripts/common.sh@367 -- # return 0 00:20:36.704 04:04:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.704 04:04:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.704 --rc genhtml_branch_coverage=1 00:20:36.704 --rc genhtml_function_coverage=1 00:20:36.704 --rc genhtml_legend=1 00:20:36.704 --rc geninfo_all_blocks=1 00:20:36.704 --rc geninfo_unexecuted_blocks=1 00:20:36.704 00:20:36.704 ' 00:20:36.704 04:04:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.704 --rc genhtml_branch_coverage=1 00:20:36.704 --rc genhtml_function_coverage=1 00:20:36.704 --rc genhtml_legend=1 00:20:36.704 --rc geninfo_all_blocks=1 00:20:36.704 --rc geninfo_unexecuted_blocks=1 00:20:36.704 00:20:36.704 ' 00:20:36.704 04:04:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.704 --rc genhtml_branch_coverage=1 00:20:36.704 --rc genhtml_function_coverage=1 00:20:36.704 --rc genhtml_legend=1 00:20:36.704 --rc geninfo_all_blocks=1 00:20:36.704 --rc geninfo_unexecuted_blocks=1 00:20:36.704 00:20:36.704 ' 00:20:36.704 
04:04:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:36.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.705 --rc genhtml_branch_coverage=1 00:20:36.705 --rc genhtml_function_coverage=1 00:20:36.705 --rc genhtml_legend=1 00:20:36.705 --rc geninfo_all_blocks=1 00:20:36.705 --rc geninfo_unexecuted_blocks=1 00:20:36.705 00:20:36.705 ' 00:20:36.705 04:04:10 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.705 04:04:10 -- nvmf/common.sh@7 -- # uname -s 00:20:36.705 04:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.705 04:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.705 04:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.705 04:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.705 04:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.705 04:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.705 04:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.705 04:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.705 04:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.705 04:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:36.705 04:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:36.705 04:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.705 04:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.705 04:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.705 04:04:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.705 04:04:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.705 04:04:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.705 04:04:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.705 04:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.705 04:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.705 04:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.705 04:04:10 -- paths/export.sh@5 -- # export PATH 00:20:36.705 04:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.705 04:04:10 -- nvmf/common.sh@46 -- # : 0 00:20:36.705 04:04:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:36.705 04:04:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:36.705 04:04:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:36.705 04:04:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.705 04:04:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.705 04:04:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:36.705 04:04:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:36.705 04:04:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:36.705 04:04:10 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.705 04:04:10 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.705 04:04:10 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:36.705 04:04:10 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:36.705 04:04:10 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.705 04:04:10 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:36.705 04:04:10 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:36.705 04:04:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:36.705 04:04:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.705 04:04:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:36.705 04:04:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:36.705 04:04:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:36.705 04:04:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.705 04:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.705 04:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.705 04:04:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:36.705 04:04:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:36.705 04:04:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.705 04:04:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:36.705 04:04:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:36.705 04:04:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:36.705 04:04:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.705 04:04:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.705 04:04:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.705 04:04:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.705 04:04:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.705 04:04:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.705 04:04:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.705 04:04:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.705 04:04:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:36.705 04:04:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:36.705 Cannot find device "nvmf_tgt_br" 00:20:36.705 04:04:10 -- nvmf/common.sh@154 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.705 Cannot find device "nvmf_tgt_br2" 00:20:36.705 04:04:10 -- nvmf/common.sh@155 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:36.705 04:04:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:36.705 Cannot find device "nvmf_tgt_br" 00:20:36.705 04:04:10 -- nvmf/common.sh@157 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:36.705 Cannot find device "nvmf_tgt_br2" 00:20:36.705 04:04:10 -- nvmf/common.sh@158 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:36.705 04:04:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:36.705 04:04:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.705 04:04:10 -- nvmf/common.sh@161 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.705 04:04:10 -- nvmf/common.sh@162 -- # true 00:20:36.705 04:04:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.705 04:04:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.705 04:04:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.705 04:04:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.705 04:04:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.705 04:04:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.705 04:04:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.705 04:04:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:36.705 04:04:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:36.705 04:04:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:36.705 04:04:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:36.705 04:04:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:36.705 04:04:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:36.705 04:04:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.705 04:04:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.705 04:04:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.705 04:04:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:36.705 04:04:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:36.705 04:04:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.705 04:04:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.705 04:04:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.705 04:04:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.705 04:04:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.705 04:04:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:36.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:36.705 00:20:36.705 --- 10.0.0.2 ping statistics --- 00:20:36.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.705 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:36.705 04:04:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:36.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:36.705 00:20:36.705 --- 10.0.0.3 ping statistics --- 00:20:36.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.705 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:36.705 04:04:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:36.705 00:20:36.706 --- 10.0.0.1 ping statistics --- 00:20:36.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.706 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:36.706 04:04:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.706 04:04:10 -- nvmf/common.sh@421 -- # return 0 00:20:36.706 04:04:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:36.706 04:04:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.706 04:04:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:36.706 04:04:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:36.706 04:04:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.706 04:04:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:36.706 04:04:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:36.706 04:04:10 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:36.706 04:04:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:36.706 04:04:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.706 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.706 04:04:10 -- nvmf/common.sh@469 -- # nvmfpid=82080 00:20:36.706 04:04:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:36.706 04:04:10 -- nvmf/common.sh@470 -- # waitforlisten 82080 00:20:36.706 04:04:10 -- common/autotest_common.sh@829 -- # '[' -z 82080 ']' 00:20:36.706 04:04:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.706 04:04:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.706 04:04:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.706 04:04:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.706 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.706 [2024-11-08 04:04:10.956875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:36.706 [2024-11-08 04:04:10.956988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.706 [2024-11-08 04:04:11.099903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.706 [2024-11-08 04:04:11.200871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:36.706 [2024-11-08 04:04:11.201005] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.706 [2024-11-08 04:04:11.201016] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.706 [2024-11-08 04:04:11.201025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
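The nvmf_veth_init trace above is the entire network fixture for this test: veth pairs into nvmf_tgt_ns_spdk, a bridge joining the initiator and target halves, iptables rules admitting NVMe/TCP, and ping checks of 10.0.0.2, 10.0.0.3 and 10.0.0.1. Condensed to the first target interface (the trace wires nvmf_tgt_if2/10.0.0.3 identically), the topology can be rebuilt standalone from the commands in the log:

#!/usr/bin/env bash
# Rebuild the initiator <-> target topology from the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins both halves; open TCP/4420 for the NVMe-oF traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator -> target, as verified in the log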
00:20:36.706 [2024-11-08 04:04:11.201144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.706 [2024-11-08 04:04:11.202028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.706 [2024-11-08 04:04:11.202074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.965 04:04:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.965 04:04:11 -- common/autotest_common.sh@862 -- # return 0 00:20:36.965 04:04:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.965 04:04:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.965 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.965 04:04:11 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.965 04:04:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 [2024-11-08 04:04:11.951183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.965 04:04:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:11 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.965 04:04:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 Malloc0 00:20:36.965 04:04:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:11 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.965 04:04:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 [2024-11-08 04:04:12.012854] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 [2024-11-08 04:04:12.020774] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 Malloc1 00:20:36.965 04:04:12 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:36.965 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.965 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:36.965 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.965 04:04:12 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:37.224 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.224 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:37.224 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.224 04:04:12 -- host/multicontroller.sh@44 -- # bdevperf_pid=82137 00:20:37.224 04:04:12 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:37.224 04:04:12 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.224 04:04:12 -- host/multicontroller.sh@47 -- # waitforlisten 82137 /var/tmp/bdevperf.sock 00:20:37.224 04:04:12 -- common/autotest_common.sh@829 -- # '[' -z 82137 ']' 00:20:37.224 04:04:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.224 04:04:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.224 04:04:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
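bdevperf is launched here with -z, so it idles until controllers are attached over its private RPC socket (/var/tmp/bdevperf.sock) rather than the target's. The first, successful attach that follows can be issued directly; this sketch assumes rpc_cmd resolves to scripts/rpc.py with the -s socket seen in the trace:

# Attach cnode1 as bdev NVMe0 through bdevperf's RPC socket, pinning the
# host-side address and service id (-i/-c) so the later duplicate
# attaches hit the same network path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

The NOT-wrapped variants below then deliberately re-attach the same name with a different hostnqn, a different subsystem, -x disable and -x failover, each expected to fail with Code=-114; only attaching to the second listener on port 4421 succeeds.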
00:20:37.224 04:04:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.224 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.161 04:04:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.161 04:04:13 -- common/autotest_common.sh@862 -- # return 0 00:20:38.161 04:04:13 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:38.161 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.161 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.161 NVMe0n1 00:20:38.161 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.161 04:04:13 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.161 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.161 04:04:13 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:38.161 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.161 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.161 1 00:20:38.161 04:04:13 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.161 04:04:13 -- common/autotest_common.sh@650 -- # local es=0 00:20:38.161 04:04:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.161 04:04:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.161 04:04:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:38.161 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.161 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.161 2024/11/08 04:04:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:38.161 request: 00:20:38.161 { 00:20:38.161 "method": "bdev_nvme_attach_controller", 00:20:38.161 "params": { 00:20:38.161 "name": "NVMe0", 00:20:38.161 "trtype": "tcp", 00:20:38.161 "traddr": "10.0.0.2", 00:20:38.161 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:38.161 "hostaddr": "10.0.0.2", 00:20:38.161 "hostsvcid": "60000", 00:20:38.161 "adrfam": "ipv4", 00:20:38.161 "trsvcid": "4420", 00:20:38.161 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:38.161 } 00:20:38.161 } 00:20:38.161 Got JSON-RPC error response 00:20:38.161 GoRPCClient: error on JSON-RPC call 00:20:38.161 04:04:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:38.161 04:04:13 -- 
common/autotest_common.sh@653 -- # es=1 00:20:38.161 04:04:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.161 04:04:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.161 04:04:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.161 04:04:13 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.161 04:04:13 -- common/autotest_common.sh@650 -- # local es=0 00:20:38.161 04:04:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.161 04:04:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:38.161 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.161 04:04:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:38.161 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.161 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.161 2024/11/08 04:04:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:38.161 request: 00:20:38.161 { 00:20:38.162 "method": "bdev_nvme_attach_controller", 00:20:38.162 "params": { 00:20:38.162 "name": "NVMe0", 00:20:38.162 "trtype": "tcp", 00:20:38.162 "traddr": "10.0.0.2", 00:20:38.162 "hostaddr": "10.0.0.2", 00:20:38.162 "hostsvcid": "60000", 00:20:38.162 "adrfam": "ipv4", 00:20:38.162 "trsvcid": "4420", 00:20:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:38.162 } 00:20:38.162 } 00:20:38.162 Got JSON-RPC error response 00:20:38.162 GoRPCClient: error on JSON-RPC call 00:20:38.162 04:04:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:38.162 04:04:13 -- common/autotest_common.sh@653 -- # es=1 00:20:38.162 04:04:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.162 04:04:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.162 04:04:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.162 04:04:13 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.162 04:04:13 -- common/autotest_common.sh@650 -- # local es=0 00:20:38.162 04:04:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.162 04:04:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:38.421 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.421 04:04:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:38.421 04:04:13 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.421 04:04:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 2024/11/08 04:04:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:38.421 request: 00:20:38.421 { 00:20:38.421 "method": "bdev_nvme_attach_controller", 00:20:38.421 "params": { 00:20:38.421 "name": "NVMe0", 00:20:38.421 "trtype": "tcp", 00:20:38.421 "traddr": "10.0.0.2", 00:20:38.421 "hostaddr": "10.0.0.2", 00:20:38.421 "hostsvcid": "60000", 00:20:38.421 "adrfam": "ipv4", 00:20:38.421 "trsvcid": "4420", 00:20:38.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.421 "multipath": "disable" 00:20:38.421 } 00:20:38.421 } 00:20:38.421 Got JSON-RPC error response 00:20:38.421 GoRPCClient: error on JSON-RPC call 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:38.421 04:04:13 -- common/autotest_common.sh@653 -- # es=1 00:20:38.421 04:04:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.421 04:04:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.421 04:04:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.421 04:04:13 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.421 04:04:13 -- common/autotest_common.sh@650 -- # local es=0 00:20:38.421 04:04:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.421 04:04:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:38.421 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.421 04:04:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:38.421 04:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.421 04:04:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 2024/11/08 04:04:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:38.421 request: 00:20:38.421 { 00:20:38.421 "method": "bdev_nvme_attach_controller", 00:20:38.421 "params": { 00:20:38.421 "name": "NVMe0", 
00:20:38.421 "trtype": "tcp", 00:20:38.421 "traddr": "10.0.0.2", 00:20:38.421 "hostaddr": "10.0.0.2", 00:20:38.421 "hostsvcid": "60000", 00:20:38.421 "adrfam": "ipv4", 00:20:38.421 "trsvcid": "4420", 00:20:38.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.421 "multipath": "failover" 00:20:38.421 } 00:20:38.421 } 00:20:38.421 Got JSON-RPC error response 00:20:38.421 GoRPCClient: error on JSON-RPC call 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:38.421 04:04:13 -- common/autotest_common.sh@653 -- # es=1 00:20:38.421 04:04:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.421 04:04:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.421 04:04:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.421 04:04:13 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.421 04:04:13 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.421 04:04:13 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.421 04:04:13 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.421 04:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.421 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 04:04:13 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:38.421 04:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.421 04:04:13 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:38.421 04:04:13 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.798 0 00:20:39.798 04:04:14 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:39.798 04:04:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.798 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 04:04:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.798 04:04:14 -- host/multicontroller.sh@100 -- # killprocess 82137 00:20:39.798 04:04:14 -- common/autotest_common.sh@936 -- # '[' -z 82137 ']' 00:20:39.798 04:04:14 -- common/autotest_common.sh@940 -- # kill -0 82137 00:20:39.798 04:04:14 -- common/autotest_common.sh@941 -- # uname 00:20:39.798 04:04:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.798 04:04:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82137 00:20:39.798 04:04:14 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:39.798 04:04:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.798 killing process with pid 82137 00:20:39.798 04:04:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82137' 00:20:39.798 04:04:14 -- common/autotest_common.sh@955 -- # kill 82137 00:20:39.798 04:04:14 -- common/autotest_common.sh@960 -- # wait 82137 00:20:40.057 04:04:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.057 04:04:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.057 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:40.057 04:04:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.057 04:04:14 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:40.057 04:04:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.057 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:40.057 04:04:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.057 04:04:14 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:40.057 04:04:14 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:40.057 04:04:14 -- common/autotest_common.sh@1607 -- # read -r file 00:20:40.057 04:04:14 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:40.057 04:04:14 -- common/autotest_common.sh@1606 -- # sort -u 00:20:40.057 04:04:14 -- common/autotest_common.sh@1608 -- # cat 00:20:40.057 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:40.057 [2024-11-08 04:04:12.141100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:40.057 [2024-11-08 04:04:12.141215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82137 ] 00:20:40.057 [2024-11-08 04:04:12.281383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.057 [2024-11-08 04:04:12.384957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.057 [2024-11-08 04:04:13.439805] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a649b9de-b2c6-4552-b461-b6ddd89ded35 already exists 00:20:40.057 [2024-11-08 04:04:13.439855] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a649b9de-b2c6-4552-b461-b6ddd89ded35 alias for bdev NVMe1n1 00:20:40.057 [2024-11-08 04:04:13.439879] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:40.057 Running I/O for 1 seconds... 
00:20:40.057 00:20:40.057 Latency(us) 00:20:40.057 [2024-11-08T04:04:15.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.057 [2024-11-08T04:04:15.168Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:40.057 NVMe0n1 : 1.00 23313.29 91.07 0.00 0.00 5479.05 3038.49 14656.23 00:20:40.057 [2024-11-08T04:04:15.168Z] =================================================================================================================== 00:20:40.057 [2024-11-08T04:04:15.168Z] Total : 23313.29 91.07 0.00 0.00 5479.05 3038.49 14656.23 00:20:40.057 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.057 00:20:40.057 Latency(us) 00:20:40.057 [2024-11-08T04:04:15.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.057 [2024-11-08T04:04:15.168Z] =================================================================================================================== 00:20:40.057 [2024-11-08T04:04:15.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.057 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:40.057 04:04:14 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:40.057 04:04:14 -- common/autotest_common.sh@1607 -- # read -r file 00:20:40.057 04:04:14 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:40.057 04:04:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:40.057 04:04:14 -- nvmf/common.sh@116 -- # sync 00:20:40.057 04:04:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:40.057 04:04:15 -- nvmf/common.sh@119 -- # set +e 00:20:40.057 04:04:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:40.057 04:04:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:40.057 rmmod nvme_tcp 00:20:40.057 rmmod nvme_fabrics 00:20:40.057 rmmod nvme_keyring 00:20:40.057 04:04:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:40.057 04:04:15 -- nvmf/common.sh@123 -- # set -e 00:20:40.057 04:04:15 -- nvmf/common.sh@124 -- # return 0 00:20:40.057 04:04:15 -- nvmf/common.sh@477 -- # '[' -n 82080 ']' 00:20:40.057 04:04:15 -- nvmf/common.sh@478 -- # killprocess 82080 00:20:40.057 04:04:15 -- common/autotest_common.sh@936 -- # '[' -z 82080 ']' 00:20:40.057 04:04:15 -- common/autotest_common.sh@940 -- # kill -0 82080 00:20:40.057 04:04:15 -- common/autotest_common.sh@941 -- # uname 00:20:40.057 04:04:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.057 04:04:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82080 00:20:40.057 killing process with pid 82080 00:20:40.057 04:04:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:40.057 04:04:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:40.057 04:04:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82080' 00:20:40.057 04:04:15 -- common/autotest_common.sh@955 -- # kill 82080 00:20:40.057 04:04:15 -- common/autotest_common.sh@960 -- # wait 82080 00:20:40.624 04:04:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.624 04:04:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.624 04:04:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.624 04:04:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.624 04:04:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.625 04:04:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.625 04:04:15 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:40.625 04:04:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.625 04:04:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:40.625 00:20:40.625 real 0m5.233s 00:20:40.625 user 0m16.121s 00:20:40.625 sys 0m1.146s 00:20:40.625 04:04:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:40.625 04:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:40.625 ************************************ 00:20:40.625 END TEST nvmf_multicontroller 00:20:40.625 ************************************ 00:20:40.625 04:04:15 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:40.625 04:04:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:40.625 04:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.625 04:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:40.625 ************************************ 00:20:40.625 START TEST nvmf_aer 00:20:40.625 ************************************ 00:20:40.625 04:04:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:40.625 * Looking for test storage... 00:20:40.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.625 04:04:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:40.625 04:04:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:40.625 04:04:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:40.884 04:04:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:40.884 04:04:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:40.884 04:04:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:40.884 04:04:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:40.884 04:04:15 -- scripts/common.sh@335 -- # IFS=.-: 00:20:40.884 04:04:15 -- scripts/common.sh@335 -- # read -ra ver1 00:20:40.884 04:04:15 -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.884 04:04:15 -- scripts/common.sh@336 -- # read -ra ver2 00:20:40.884 04:04:15 -- scripts/common.sh@337 -- # local 'op=<' 00:20:40.884 04:04:15 -- scripts/common.sh@339 -- # ver1_l=2 00:20:40.884 04:04:15 -- scripts/common.sh@340 -- # ver2_l=1 00:20:40.884 04:04:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:40.884 04:04:15 -- scripts/common.sh@343 -- # case "$op" in 00:20:40.884 04:04:15 -- scripts/common.sh@344 -- # : 1 00:20:40.884 04:04:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:40.884 04:04:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.884 04:04:15 -- scripts/common.sh@364 -- # decimal 1 00:20:40.884 04:04:15 -- scripts/common.sh@352 -- # local d=1 00:20:40.884 04:04:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.884 04:04:15 -- scripts/common.sh@354 -- # echo 1 00:20:40.884 04:04:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:40.884 04:04:15 -- scripts/common.sh@365 -- # decimal 2 00:20:40.884 04:04:15 -- scripts/common.sh@352 -- # local d=2 00:20:40.884 04:04:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.884 04:04:15 -- scripts/common.sh@354 -- # echo 2 00:20:40.884 04:04:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:40.884 04:04:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.884 04:04:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:40.884 04:04:15 -- scripts/common.sh@367 -- # return 0 00:20:40.884 04:04:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.884 04:04:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:40.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.884 --rc genhtml_branch_coverage=1 00:20:40.884 --rc genhtml_function_coverage=1 00:20:40.884 --rc genhtml_legend=1 00:20:40.884 --rc geninfo_all_blocks=1 00:20:40.884 --rc geninfo_unexecuted_blocks=1 00:20:40.884 00:20:40.884 ' 00:20:40.884 04:04:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:40.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.884 --rc genhtml_branch_coverage=1 00:20:40.884 --rc genhtml_function_coverage=1 00:20:40.884 --rc genhtml_legend=1 00:20:40.884 --rc geninfo_all_blocks=1 00:20:40.884 --rc geninfo_unexecuted_blocks=1 00:20:40.884 00:20:40.884 ' 00:20:40.884 04:04:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:40.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.884 --rc genhtml_branch_coverage=1 00:20:40.884 --rc genhtml_function_coverage=1 00:20:40.884 --rc genhtml_legend=1 00:20:40.884 --rc geninfo_all_blocks=1 00:20:40.884 --rc geninfo_unexecuted_blocks=1 00:20:40.884 00:20:40.884 ' 00:20:40.884 04:04:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:40.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.884 --rc genhtml_branch_coverage=1 00:20:40.884 --rc genhtml_function_coverage=1 00:20:40.884 --rc genhtml_legend=1 00:20:40.884 --rc geninfo_all_blocks=1 00:20:40.884 --rc geninfo_unexecuted_blocks=1 00:20:40.884 00:20:40.884 ' 00:20:40.884 04:04:15 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.884 04:04:15 -- nvmf/common.sh@7 -- # uname -s 00:20:40.884 04:04:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.884 04:04:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.884 04:04:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.884 04:04:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.884 04:04:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.884 04:04:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.884 04:04:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.884 04:04:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.884 04:04:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.884 04:04:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.884 04:04:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:40.884 
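The xtrace just above is scripts/common.sh deciding whether the installed lcov predates 2.x (the lt 1.15 2 call) so the right --rc option spelling gets exported. Condensed into a standalone sketch, with the traced cmp_versions helper inlined into lt; the missing-component defaulting to 0 is my simplification of the decimal() helper:

# lt A B: succeed iff version A sorts strictly before version B
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A newer: not less
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A older: less
  done
  return 1   # equal versions are not strictly less
}
lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* option spelling'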
04:04:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:40.884 04:04:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.884 04:04:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.884 04:04:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.884 04:04:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.884 04:04:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.884 04:04:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.884 04:04:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.884 04:04:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.884 04:04:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.884 04:04:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.884 04:04:15 -- paths/export.sh@5 -- # export PATH 00:20:40.884 04:04:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.884 04:04:15 -- nvmf/common.sh@46 -- # : 0 00:20:40.884 04:04:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.884 04:04:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.884 04:04:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.884 04:04:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.884 04:04:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.884 04:04:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:40.885 04:04:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.885 04:04:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.885 04:04:15 -- host/aer.sh@11 -- # nvmftestinit 00:20:40.885 04:04:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.885 04:04:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.885 04:04:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.885 04:04:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.885 04:04:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.885 04:04:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.885 04:04:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.885 04:04:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.885 04:04:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:40.885 04:04:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:40.885 04:04:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:40.885 04:04:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:40.885 04:04:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:40.885 04:04:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:40.885 04:04:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.885 04:04:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.885 04:04:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.885 04:04:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:40.885 04:04:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.885 04:04:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.885 04:04:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.885 04:04:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.885 04:04:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.885 04:04:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.885 04:04:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.885 04:04:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.885 04:04:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:40.885 04:04:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:40.885 Cannot find device "nvmf_tgt_br" 00:20:40.885 04:04:15 -- nvmf/common.sh@154 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.885 Cannot find device "nvmf_tgt_br2" 00:20:40.885 04:04:15 -- nvmf/common.sh@155 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:40.885 04:04:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:40.885 Cannot find device "nvmf_tgt_br" 00:20:40.885 04:04:15 -- nvmf/common.sh@157 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.885 Cannot find device "nvmf_tgt_br2" 00:20:40.885 04:04:15 -- nvmf/common.sh@158 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.885 04:04:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.885 04:04:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.885 04:04:15 -- nvmf/common.sh@161 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.885 04:04:15 -- nvmf/common.sh@162 -- # true 00:20:40.885 04:04:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.885 04:04:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.885 04:04:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.885 04:04:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.885 04:04:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.885 04:04:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.885 04:04:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.144 04:04:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:41.144 04:04:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:41.144 04:04:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:41.144 04:04:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:41.144 04:04:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:41.144 04:04:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:41.144 04:04:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.144 04:04:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.144 04:04:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.144 04:04:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:41.144 04:04:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:41.144 04:04:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.144 04:04:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.144 04:04:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.144 04:04:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.144 04:04:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.144 04:04:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:41.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:41.144 00:20:41.144 --- 10.0.0.2 ping statistics --- 00:20:41.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.144 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:41.144 04:04:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:41.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:41.144 00:20:41.144 --- 10.0.0.3 ping statistics --- 00:20:41.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.144 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:41.144 04:04:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:20:41.144 00:20:41.144 --- 10.0.0.1 ping statistics --- 00:20:41.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.144 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:41.144 04:04:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.144 04:04:16 -- nvmf/common.sh@421 -- # return 0 00:20:41.144 04:04:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:41.144 04:04:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.144 04:04:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:41.144 04:04:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:41.144 04:04:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.144 04:04:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:41.144 04:04:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:41.144 04:04:16 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:41.144 04:04:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:41.144 04:04:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.144 04:04:16 -- common/autotest_common.sh@10 -- # set +x 00:20:41.144 04:04:16 -- nvmf/common.sh@469 -- # nvmfpid=82392 00:20:41.144 04:04:16 -- nvmf/common.sh@470 -- # waitforlisten 82392 00:20:41.144 04:04:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.144 04:04:16 -- common/autotest_common.sh@829 -- # '[' -z 82392 ']' 00:20:41.144 04:04:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.144 04:04:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.144 04:04:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.144 04:04:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.144 04:04:16 -- common/autotest_common.sh@10 -- # set +x 00:20:41.144 [2024-11-08 04:04:16.203277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:41.144 [2024-11-08 04:04:16.203359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.403 [2024-11-08 04:04:16.346301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.403 [2024-11-08 04:04:16.455454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.403 [2024-11-08 04:04:16.455648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.403 [2024-11-08 04:04:16.455667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.403 [2024-11-08 04:04:16.455679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
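All the "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_init tears down any stale topology first (each failed teardown is swallowed by the '# true' branches) before rebuilding it from scratch. The rebuild reduces to the following standalone sketch, with interface names and addresses exactly as in the log and the second target interface (nvmf_tgt_if2, 10.0.0.3) elided for brevity; run as root:

# target side lives in its own network namespace; each side gets a veth
# pair whose peer is enslaved to one bridge, giving an all-local TCP path
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP traffic in and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target, as verified by the pings above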
00:20:41.403 [2024-11-08 04:04:16.455876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.403 [2024-11-08 04:04:16.455994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.403 [2024-11-08 04:04:16.456206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.403 [2024-11-08 04:04:16.456221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.338 04:04:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.338 04:04:17 -- common/autotest_common.sh@862 -- # return 0 00:20:42.338 04:04:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:42.338 04:04:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 04:04:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.338 04:04:17 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 [2024-11-08 04:04:17.174309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 Malloc0 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 [2024-11-08 04:04:17.253399] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:42.338 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.338 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.338 [2024-11-08 04:04:17.261134] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:42.338 [ 00:20:42.338 { 00:20:42.338 "allow_any_host": true, 00:20:42.338 "hosts": [], 00:20:42.338 "listen_addresses": [], 00:20:42.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:42.338 "subtype": "Discovery" 00:20:42.338 }, 00:20:42.338 { 00:20:42.338 "allow_any_host": true, 00:20:42.338 "hosts": 
[], 00:20:42.338 "listen_addresses": [ 00:20:42.338 { 00:20:42.338 "adrfam": "IPv4", 00:20:42.338 "traddr": "10.0.0.2", 00:20:42.338 "transport": "TCP", 00:20:42.338 "trsvcid": "4420", 00:20:42.338 "trtype": "TCP" 00:20:42.338 } 00:20:42.338 ], 00:20:42.338 "max_cntlid": 65519, 00:20:42.338 "max_namespaces": 2, 00:20:42.338 "min_cntlid": 1, 00:20:42.338 "model_number": "SPDK bdev Controller", 00:20:42.338 "namespaces": [ 00:20:42.338 { 00:20:42.338 "bdev_name": "Malloc0", 00:20:42.338 "name": "Malloc0", 00:20:42.338 "nguid": "2A4040D980144CEDA66944A6B8D0959B", 00:20:42.338 "nsid": 1, 00:20:42.338 "uuid": "2a4040d9-8014-4ced-a669-44a6b8d0959b" 00:20:42.338 } 00:20:42.338 ], 00:20:42.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.338 "serial_number": "SPDK00000000000001", 00:20:42.338 "subtype": "NVMe" 00:20:42.338 } 00:20:42.338 ] 00:20:42.338 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.338 04:04:17 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:42.338 04:04:17 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:42.338 04:04:17 -- host/aer.sh@33 -- # aerpid=82452 00:20:42.338 04:04:17 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:42.338 04:04:17 -- common/autotest_common.sh@1254 -- # local i=0 00:20:42.338 04:04:17 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:42.338 04:04:17 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:42.338 04:04:17 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:42.338 04:04:17 -- common/autotest_common.sh@1257 -- # i=1 00:20:42.338 04:04:17 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:42.338 04:04:17 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:42.338 04:04:17 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:42.338 04:04:17 -- common/autotest_common.sh@1257 -- # i=2 00:20:42.338 04:04:17 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:42.598 04:04:17 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:42.598 04:04:17 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:42.598 04:04:17 -- common/autotest_common.sh@1265 -- # return 0 00:20:42.598 04:04:17 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 Malloc1 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 [ 00:20:42.598 { 00:20:42.598 "allow_any_host": true, 00:20:42.598 "hosts": [], 00:20:42.598 "listen_addresses": [], 00:20:42.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:42.598 "subtype": "Discovery" 00:20:42.598 }, 00:20:42.598 { 00:20:42.598 "allow_any_host": true, 00:20:42.598 "hosts": [], 00:20:42.598 "listen_addresses": [ 00:20:42.598 { 00:20:42.598 "adrfam": "IPv4", 00:20:42.598 "traddr": "10.0.0.2", 00:20:42.598 "transport": "TCP", 00:20:42.598 "trsvcid": "4420", 00:20:42.598 "trtype": "TCP" 00:20:42.598 } 00:20:42.598 ], 00:20:42.598 "max_cntlid": 65519, 00:20:42.598 Asynchronous Event Request test 00:20:42.598 Attaching to 10.0.0.2 00:20:42.598 Attached to 10.0.0.2 00:20:42.598 Registering asynchronous event callbacks... 00:20:42.598 Starting namespace attribute notice tests for all controllers... 00:20:42.598 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:42.598 aer_cb - Changed Namespace 00:20:42.598 Cleaning up... 
00:20:42.598 "max_namespaces": 2, 00:20:42.598 "min_cntlid": 1, 00:20:42.598 "model_number": "SPDK bdev Controller", 00:20:42.598 "namespaces": [ 00:20:42.598 { 00:20:42.598 "bdev_name": "Malloc0", 00:20:42.598 "name": "Malloc0", 00:20:42.598 "nguid": "2A4040D980144CEDA66944A6B8D0959B", 00:20:42.598 "nsid": 1, 00:20:42.598 "uuid": "2a4040d9-8014-4ced-a669-44a6b8d0959b" 00:20:42.598 }, 00:20:42.598 { 00:20:42.598 "bdev_name": "Malloc1", 00:20:42.598 "name": "Malloc1", 00:20:42.598 "nguid": "C0A01F544B744A928A3850E88F492C97", 00:20:42.598 "nsid": 2, 00:20:42.598 "uuid": "c0a01f54-4b74-4a92-8a38-50e88f492c97" 00:20:42.598 } 00:20:42.598 ], 00:20:42.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.598 "serial_number": "SPDK00000000000001", 00:20:42.598 "subtype": "NVMe" 00:20:42.598 } 00:20:42.598 ] 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@43 -- # wait 82452 00:20:42.598 04:04:17 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.598 04:04:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.598 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 04:04:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.598 04:04:17 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:42.598 04:04:17 -- host/aer.sh@51 -- # nvmftestfini 00:20:42.598 04:04:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:42.598 04:04:17 -- nvmf/common.sh@116 -- # sync 00:20:42.857 04:04:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:42.857 04:04:17 -- nvmf/common.sh@119 -- # set +e 00:20:42.857 04:04:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:42.857 04:04:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:42.857 rmmod nvme_tcp 00:20:42.857 rmmod nvme_fabrics 00:20:42.857 rmmod nvme_keyring 00:20:42.857 04:04:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:42.857 04:04:17 -- nvmf/common.sh@123 -- # set -e 00:20:42.857 04:04:17 -- nvmf/common.sh@124 -- # return 0 00:20:42.857 04:04:17 -- nvmf/common.sh@477 -- # '[' -n 82392 ']' 00:20:42.857 04:04:17 -- nvmf/common.sh@478 -- # killprocess 82392 00:20:42.857 04:04:17 -- common/autotest_common.sh@936 -- # '[' -z 82392 ']' 00:20:42.857 04:04:17 -- common/autotest_common.sh@940 -- # kill -0 82392 00:20:42.857 04:04:17 -- common/autotest_common.sh@941 -- # uname 00:20:42.857 04:04:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.857 04:04:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82392 00:20:42.857 killing process with pid 82392 00:20:42.857 04:04:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:42.857 04:04:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:42.857 04:04:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82392' 00:20:42.857 04:04:17 -- common/autotest_common.sh@955 -- # kill 82392 00:20:42.857 
[2024-11-08 04:04:17.813537] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:42.857 04:04:17 -- common/autotest_common.sh@960 -- # wait 82392 00:20:43.115 04:04:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:43.115 04:04:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:43.115 04:04:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:43.115 04:04:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.115 04:04:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:43.115 04:04:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.115 04:04:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.115 04:04:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.115 04:04:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:43.115 00:20:43.115 real 0m2.569s 00:20:43.115 user 0m6.681s 00:20:43.115 sys 0m0.774s 00:20:43.115 04:04:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:43.115 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:20:43.115 ************************************ 00:20:43.115 END TEST nvmf_aer 00:20:43.115 ************************************ 00:20:43.115 04:04:18 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:43.115 04:04:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:43.115 04:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:43.115 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:20:43.115 ************************************ 00:20:43.115 START TEST nvmf_async_init 00:20:43.115 ************************************ 00:20:43.115 04:04:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:43.375 * Looking for test storage... 00:20:43.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.375 04:04:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:43.375 04:04:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:43.375 04:04:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:43.375 04:04:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:43.375 04:04:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:43.375 04:04:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:43.375 04:04:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:43.375 04:04:18 -- scripts/common.sh@335 -- # IFS=.-: 00:20:43.375 04:04:18 -- scripts/common.sh@335 -- # read -ra ver1 00:20:43.375 04:04:18 -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.375 04:04:18 -- scripts/common.sh@336 -- # read -ra ver2 00:20:43.375 04:04:18 -- scripts/common.sh@337 -- # local 'op=<' 00:20:43.375 04:04:18 -- scripts/common.sh@339 -- # ver1_l=2 00:20:43.375 04:04:18 -- scripts/common.sh@340 -- # ver2_l=1 00:20:43.375 04:04:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:43.375 04:04:18 -- scripts/common.sh@343 -- # case "$op" in 00:20:43.375 04:04:18 -- scripts/common.sh@344 -- # : 1 00:20:43.375 04:04:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:43.375 04:04:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.375 04:04:18 -- scripts/common.sh@364 -- # decimal 1 00:20:43.375 04:04:18 -- scripts/common.sh@352 -- # local d=1 00:20:43.375 04:04:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.375 04:04:18 -- scripts/common.sh@354 -- # echo 1 00:20:43.375 04:04:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:43.375 04:04:18 -- scripts/common.sh@365 -- # decimal 2 00:20:43.375 04:04:18 -- scripts/common.sh@352 -- # local d=2 00:20:43.375 04:04:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.375 04:04:18 -- scripts/common.sh@354 -- # echo 2 00:20:43.375 04:04:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:43.375 04:04:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:43.375 04:04:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:43.375 04:04:18 -- scripts/common.sh@367 -- # return 0 00:20:43.375 04:04:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.375 04:04:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:43.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.375 --rc genhtml_branch_coverage=1 00:20:43.375 --rc genhtml_function_coverage=1 00:20:43.375 --rc genhtml_legend=1 00:20:43.375 --rc geninfo_all_blocks=1 00:20:43.375 --rc geninfo_unexecuted_blocks=1 00:20:43.375 00:20:43.375 ' 00:20:43.375 04:04:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:43.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.375 --rc genhtml_branch_coverage=1 00:20:43.375 --rc genhtml_function_coverage=1 00:20:43.375 --rc genhtml_legend=1 00:20:43.375 --rc geninfo_all_blocks=1 00:20:43.375 --rc geninfo_unexecuted_blocks=1 00:20:43.375 00:20:43.375 ' 00:20:43.375 04:04:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:43.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.375 --rc genhtml_branch_coverage=1 00:20:43.375 --rc genhtml_function_coverage=1 00:20:43.375 --rc genhtml_legend=1 00:20:43.375 --rc geninfo_all_blocks=1 00:20:43.375 --rc geninfo_unexecuted_blocks=1 00:20:43.375 00:20:43.375 ' 00:20:43.375 04:04:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:43.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.375 --rc genhtml_branch_coverage=1 00:20:43.375 --rc genhtml_function_coverage=1 00:20:43.375 --rc genhtml_legend=1 00:20:43.375 --rc geninfo_all_blocks=1 00:20:43.375 --rc geninfo_unexecuted_blocks=1 00:20:43.375 00:20:43.375 ' 00:20:43.375 04:04:18 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.375 04:04:18 -- nvmf/common.sh@7 -- # uname -s 00:20:43.375 04:04:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.375 04:04:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.375 04:04:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.375 04:04:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.375 04:04:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.375 04:04:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.375 04:04:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.375 04:04:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.375 04:04:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.375 04:04:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.375 04:04:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:43.375 
04:04:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:43.375 04:04:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.375 04:04:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.375 04:04:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.375 04:04:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.375 04:04:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.375 04:04:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.375 04:04:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.375 04:04:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.375 04:04:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.375 04:04:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.375 04:04:18 -- paths/export.sh@5 -- # export PATH 00:20:43.375 04:04:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.375 04:04:18 -- nvmf/common.sh@46 -- # : 0 00:20:43.375 04:04:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:43.375 04:04:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:43.375 04:04:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:43.375 04:04:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.375 04:04:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.375 04:04:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
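async_init.sh now walks through the same common.sh preamble and veth rebuild as aer.sh before starting its own target (single-core this time, -m 0x1). The launch pattern itself, traced below and earlier for pid 82392, is worth isolating; a sketch assuming polling rpc.py is an acceptable stand-in for the real waitforlisten helper:

# start nvmf_tgt inside the target namespace; -e 0xFFFF enables all trace
# groups, -i 0 pins the shared-memory id, -m is the reactor core mask
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# waitforlisten, approximately: poll until the RPC socket answers
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is up"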
00:20:43.375 04:04:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:43.375 04:04:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:43.375 04:04:18 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:43.375 04:04:18 -- host/async_init.sh@14 -- # null_block_size=512 00:20:43.375 04:04:18 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:43.376 04:04:18 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:43.376 04:04:18 -- host/async_init.sh@20 -- # tr -d - 00:20:43.376 04:04:18 -- host/async_init.sh@20 -- # uuidgen 00:20:43.376 04:04:18 -- host/async_init.sh@20 -- # nguid=a50ae9d94be940beb97989e56b387790 00:20:43.376 04:04:18 -- host/async_init.sh@22 -- # nvmftestinit 00:20:43.376 04:04:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:43.376 04:04:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.376 04:04:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:43.376 04:04:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:43.376 04:04:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:43.376 04:04:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.376 04:04:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.376 04:04:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.376 04:04:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:43.376 04:04:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:43.376 04:04:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:43.376 04:04:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:43.376 04:04:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:43.376 04:04:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:43.376 04:04:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.376 04:04:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.376 04:04:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.376 04:04:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:43.376 04:04:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.376 04:04:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.376 04:04:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.376 04:04:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.376 04:04:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.376 04:04:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.376 04:04:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.376 04:04:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.376 04:04:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:43.376 04:04:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:43.376 Cannot find device "nvmf_tgt_br" 00:20:43.376 04:04:18 -- nvmf/common.sh@154 -- # true 00:20:43.376 04:04:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.376 Cannot find device "nvmf_tgt_br2" 00:20:43.376 04:04:18 -- nvmf/common.sh@155 -- # true 00:20:43.376 04:04:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:43.376 04:04:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:43.376 Cannot find device "nvmf_tgt_br" 00:20:43.376 04:04:18 -- nvmf/common.sh@157 -- # true 00:20:43.376 04:04:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:43.376 Cannot find device "nvmf_tgt_br2" 00:20:43.376 04:04:18 
-- nvmf/common.sh@158 -- # true 00:20:43.376 04:04:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:43.635 04:04:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:43.635 04:04:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.635 04:04:18 -- nvmf/common.sh@161 -- # true 00:20:43.635 04:04:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.635 04:04:18 -- nvmf/common.sh@162 -- # true 00:20:43.635 04:04:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.635 04:04:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.635 04:04:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.635 04:04:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.635 04:04:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.635 04:04:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.635 04:04:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.635 04:04:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.635 04:04:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.635 04:04:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:43.635 04:04:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:43.635 04:04:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:43.635 04:04:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:43.635 04:04:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.635 04:04:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.635 04:04:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.635 04:04:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:43.635 04:04:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:43.635 04:04:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.635 04:04:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.635 04:04:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.635 04:04:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.635 04:04:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.635 04:04:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:43.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:43.635 00:20:43.635 --- 10.0.0.2 ping statistics --- 00:20:43.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.635 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:43.635 04:04:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:43.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:43.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:43.635 00:20:43.635 --- 10.0.0.3 ping statistics --- 00:20:43.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.635 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:43.635 04:04:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:43.635 00:20:43.635 --- 10.0.0.1 ping statistics --- 00:20:43.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.635 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:43.635 04:04:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.635 04:04:18 -- nvmf/common.sh@421 -- # return 0 00:20:43.635 04:04:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:43.635 04:04:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.635 04:04:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:43.635 04:04:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:43.635 04:04:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.635 04:04:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:43.635 04:04:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:43.635 04:04:18 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:43.635 04:04:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:43.635 04:04:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.635 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:20:43.893 04:04:18 -- nvmf/common.sh@469 -- # nvmfpid=82632 00:20:43.893 04:04:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:43.893 04:04:18 -- nvmf/common.sh@470 -- # waitforlisten 82632 00:20:43.893 04:04:18 -- common/autotest_common.sh@829 -- # '[' -z 82632 ']' 00:20:43.893 04:04:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.893 04:04:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.893 04:04:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.893 04:04:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.893 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:20:43.893 [2024-11-08 04:04:18.806824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:43.894 [2024-11-08 04:04:18.806918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.894 [2024-11-08 04:04:18.946637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.152 [2024-11-08 04:04:19.033934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:44.152 [2024-11-08 04:04:19.034069] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.152 [2024-11-08 04:04:19.034082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.152 [2024-11-08 04:04:19.034090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.152 [2024-11-08 04:04:19.034121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.717 04:04:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.717 04:04:19 -- common/autotest_common.sh@862 -- # return 0 00:20:44.717 04:04:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:44.717 04:04:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.717 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 04:04:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.974 04:04:19 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.974 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 [2024-11-08 04:04:19.859106] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.974 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.974 04:04:19 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.974 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 null0 00:20:44.974 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.974 04:04:19 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.974 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.974 04:04:19 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.974 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.974 04:04:19 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a50ae9d94be940beb97989e56b387790 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.974 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.974 04:04:19 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.974 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.975 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:44.975 [2024-11-08 04:04:19.903216] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.975 04:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.975 04:04:19 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:44.975 04:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.975 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:45.231 nvme0n1 00:20:45.232 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.232 04:04:20 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:45.232 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.232 04:04:20 -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.232 [ 00:20:45.232 { 00:20:45.232 "aliases": [ 00:20:45.232 "a50ae9d9-4be9-40be-b979-89e56b387790" 00:20:45.232 ], 00:20:45.232 "assigned_rate_limits": { 00:20:45.232 "r_mbytes_per_sec": 0, 00:20:45.232 "rw_ios_per_sec": 0, 00:20:45.232 "rw_mbytes_per_sec": 0, 00:20:45.232 "w_mbytes_per_sec": 0 00:20:45.232 }, 00:20:45.232 "block_size": 512, 00:20:45.232 "claimed": false, 00:20:45.232 "driver_specific": { 00:20:45.232 "mp_policy": "active_passive", 00:20:45.232 "nvme": [ 00:20:45.232 { 00:20:45.232 "ctrlr_data": { 00:20:45.232 "ana_reporting": false, 00:20:45.232 "cntlid": 1, 00:20:45.232 "firmware_revision": "24.01.1", 00:20:45.232 "model_number": "SPDK bdev Controller", 00:20:45.232 "multi_ctrlr": true, 00:20:45.232 "oacs": { 00:20:45.232 "firmware": 0, 00:20:45.232 "format": 0, 00:20:45.232 "ns_manage": 0, 00:20:45.232 "security": 0 00:20:45.232 }, 00:20:45.232 "serial_number": "00000000000000000000", 00:20:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.232 "vendor_id": "0x8086" 00:20:45.232 }, 00:20:45.232 "ns_data": { 00:20:45.232 "can_share": true, 00:20:45.232 "id": 1 00:20:45.232 }, 00:20:45.232 "trid": { 00:20:45.232 "adrfam": "IPv4", 00:20:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.232 "traddr": "10.0.0.2", 00:20:45.232 "trsvcid": "4420", 00:20:45.232 "trtype": "TCP" 00:20:45.232 }, 00:20:45.232 "vs": { 00:20:45.232 "nvme_version": "1.3" 00:20:45.232 } 00:20:45.232 } 00:20:45.232 ] 00:20:45.232 }, 00:20:45.232 "name": "nvme0n1", 00:20:45.232 "num_blocks": 2097152, 00:20:45.232 "product_name": "NVMe disk", 00:20:45.232 "supported_io_types": { 00:20:45.232 "abort": true, 00:20:45.232 "compare": true, 00:20:45.232 "compare_and_write": true, 00:20:45.232 "flush": true, 00:20:45.232 "nvme_admin": true, 00:20:45.232 "nvme_io": true, 00:20:45.232 "read": true, 00:20:45.232 "reset": true, 00:20:45.232 "unmap": false, 00:20:45.232 "write": true, 00:20:45.232 "write_zeroes": true 00:20:45.232 }, 00:20:45.232 "uuid": "a50ae9d9-4be9-40be-b979-89e56b387790", 00:20:45.232 "zoned": false 00:20:45.232 } 00:20:45.232 ] 00:20:45.232 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.232 04:04:20 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:45.232 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.232 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.232 [2024-11-08 04:04:20.167167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:45.232 [2024-11-08 04:04:20.167253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x537f90 (9): Bad file descriptor 00:20:45.232 [2024-11-08 04:04:20.299534] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
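That reset sequence is the heart of the test: bdev_nvme_reset_controller drops the admin connection (note the 'Bad file descriptor' flush on the old qpair at 0x537f90), reconnects, and the bdev must reappear with the same namespace identity, uuid a50ae9d9-4be9-40be-b979-89e56b387790 and nguid unchanged, while ctrlr_data.cntlid ticks from 1 to 2 for the fresh controller. A hand-run spot check of the same invariant (the jq filter is my addition, not part of the test):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_get_bdevs -b nvme0n1 \
    | jq '.[0] | {uuid, cntlid: .driver_specific.nvme[0].ctrlr_data.cntlid}'
$rpc bdev_nvme_reset_controller nvme0
$rpc bdev_get_bdevs -b nvme0n1 \
    | jq '.[0] | {uuid, cntlid: .driver_specific.nvme[0].ctrlr_data.cntlid}'
# uuid should be identical across both dumps; cntlid should have advanced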
00:20:45.232 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.232 04:04:20 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:45.232 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.232 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.232 [ 00:20:45.232 { 00:20:45.232 "aliases": [ 00:20:45.232 "a50ae9d9-4be9-40be-b979-89e56b387790" 00:20:45.232 ], 00:20:45.232 "assigned_rate_limits": { 00:20:45.232 "r_mbytes_per_sec": 0, 00:20:45.232 "rw_ios_per_sec": 0, 00:20:45.232 "rw_mbytes_per_sec": 0, 00:20:45.232 "w_mbytes_per_sec": 0 00:20:45.232 }, 00:20:45.232 "block_size": 512, 00:20:45.232 "claimed": false, 00:20:45.232 "driver_specific": { 00:20:45.232 "mp_policy": "active_passive", 00:20:45.232 "nvme": [ 00:20:45.232 { 00:20:45.232 "ctrlr_data": { 00:20:45.232 "ana_reporting": false, 00:20:45.232 "cntlid": 2, 00:20:45.232 "firmware_revision": "24.01.1", 00:20:45.232 "model_number": "SPDK bdev Controller", 00:20:45.232 "multi_ctrlr": true, 00:20:45.232 "oacs": { 00:20:45.232 "firmware": 0, 00:20:45.232 "format": 0, 00:20:45.232 "ns_manage": 0, 00:20:45.232 "security": 0 00:20:45.232 }, 00:20:45.232 "serial_number": "00000000000000000000", 00:20:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.232 "vendor_id": "0x8086" 00:20:45.232 }, 00:20:45.232 "ns_data": { 00:20:45.232 "can_share": true, 00:20:45.232 "id": 1 00:20:45.232 }, 00:20:45.232 "trid": { 00:20:45.232 "adrfam": "IPv4", 00:20:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.232 "traddr": "10.0.0.2", 00:20:45.232 "trsvcid": "4420", 00:20:45.232 "trtype": "TCP" 00:20:45.232 }, 00:20:45.232 "vs": { 00:20:45.232 "nvme_version": "1.3" 00:20:45.232 } 00:20:45.232 } 00:20:45.232 ] 00:20:45.232 }, 00:20:45.232 "name": "nvme0n1", 00:20:45.232 "num_blocks": 2097152, 00:20:45.232 "product_name": "NVMe disk", 00:20:45.232 "supported_io_types": { 00:20:45.232 "abort": true, 00:20:45.232 "compare": true, 00:20:45.232 "compare_and_write": true, 00:20:45.232 "flush": true, 00:20:45.232 "nvme_admin": true, 00:20:45.232 "nvme_io": true, 00:20:45.232 "read": true, 00:20:45.232 "reset": true, 00:20:45.232 "unmap": false, 00:20:45.232 "write": true, 00:20:45.232 "write_zeroes": true 00:20:45.232 }, 00:20:45.232 "uuid": "a50ae9d9-4be9-40be-b979-89e56b387790", 00:20:45.232 "zoned": false 00:20:45.232 } 00:20:45.232 ] 00:20:45.232 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.232 04:04:20 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.232 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.232 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@53 -- # mktemp 00:20:45.489 04:04:20 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sCHSYQUT0C 00:20:45.489 04:04:20 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:45.489 04:04:20 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sCHSYQUT0C 00:20:45.489 04:04:20 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 [2024-11-08 04:04:20.367298] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.489 [2024-11-08 04:04:20.367448] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sCHSYQUT0C 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sCHSYQUT0C 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 [2024-11-08 04:04:20.383292] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.489 nvme0n1 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 [ 00:20:45.489 { 00:20:45.489 "aliases": [ 00:20:45.489 "a50ae9d9-4be9-40be-b979-89e56b387790" 00:20:45.489 ], 00:20:45.489 "assigned_rate_limits": { 00:20:45.489 "r_mbytes_per_sec": 0, 00:20:45.489 "rw_ios_per_sec": 0, 00:20:45.489 "rw_mbytes_per_sec": 0, 00:20:45.489 "w_mbytes_per_sec": 0 00:20:45.489 }, 00:20:45.489 "block_size": 512, 00:20:45.489 "claimed": false, 00:20:45.489 "driver_specific": { 00:20:45.489 "mp_policy": "active_passive", 00:20:45.489 "nvme": [ 00:20:45.489 { 00:20:45.489 "ctrlr_data": { 00:20:45.489 "ana_reporting": false, 00:20:45.489 "cntlid": 3, 00:20:45.489 "firmware_revision": "24.01.1", 00:20:45.489 "model_number": "SPDK bdev Controller", 00:20:45.489 "multi_ctrlr": true, 00:20:45.489 "oacs": { 00:20:45.489 "firmware": 0, 00:20:45.489 "format": 0, 00:20:45.489 "ns_manage": 0, 00:20:45.489 "security": 0 00:20:45.489 }, 00:20:45.489 "serial_number": "00000000000000000000", 00:20:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.489 "vendor_id": "0x8086" 00:20:45.489 }, 00:20:45.489 "ns_data": { 00:20:45.489 "can_share": true, 00:20:45.489 "id": 1 00:20:45.489 }, 00:20:45.489 "trid": { 00:20:45.489 "adrfam": "IPv4", 00:20:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.489 "traddr": "10.0.0.2", 00:20:45.489 "trsvcid": "4421", 00:20:45.489 "trtype": "TCP" 00:20:45.489 }, 00:20:45.489 "vs": { 00:20:45.489 "nvme_version": "1.3" 00:20:45.489 } 00:20:45.489 } 00:20:45.489 ] 00:20:45.489 }, 00:20:45.489 "name": "nvme0n1", 00:20:45.489 "num_blocks": 2097152, 00:20:45.489 "product_name": "NVMe disk", 00:20:45.489 "supported_io_types": { 00:20:45.489 "abort": true, 00:20:45.489 "compare": true, 00:20:45.489 "compare_and_write": true, 00:20:45.489 "flush": true, 00:20:45.489 "nvme_admin": true, 00:20:45.489 "nvme_io": true, 00:20:45.489 
"read": true, 00:20:45.489 "reset": true, 00:20:45.489 "unmap": false, 00:20:45.489 "write": true, 00:20:45.489 "write_zeroes": true 00:20:45.489 }, 00:20:45.489 "uuid": "a50ae9d9-4be9-40be-b979-89e56b387790", 00:20:45.489 "zoned": false 00:20:45.489 } 00:20:45.489 ] 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.489 04:04:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.489 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 04:04:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 04:04:20 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.sCHSYQUT0C 00:20:45.489 04:04:20 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:45.489 04:04:20 -- host/async_init.sh@78 -- # nvmftestfini 00:20:45.489 04:04:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:45.489 04:04:20 -- nvmf/common.sh@116 -- # sync 00:20:45.489 04:04:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:45.489 04:04:20 -- nvmf/common.sh@119 -- # set +e 00:20:45.489 04:04:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:45.489 04:04:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:45.489 rmmod nvme_tcp 00:20:45.489 rmmod nvme_fabrics 00:20:45.747 rmmod nvme_keyring 00:20:45.747 04:04:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:45.747 04:04:20 -- nvmf/common.sh@123 -- # set -e 00:20:45.747 04:04:20 -- nvmf/common.sh@124 -- # return 0 00:20:45.747 04:04:20 -- nvmf/common.sh@477 -- # '[' -n 82632 ']' 00:20:45.747 04:04:20 -- nvmf/common.sh@478 -- # killprocess 82632 00:20:45.747 04:04:20 -- common/autotest_common.sh@936 -- # '[' -z 82632 ']' 00:20:45.747 04:04:20 -- common/autotest_common.sh@940 -- # kill -0 82632 00:20:45.747 04:04:20 -- common/autotest_common.sh@941 -- # uname 00:20:45.747 04:04:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.747 04:04:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82632 00:20:45.747 04:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:45.747 04:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:45.747 killing process with pid 82632 00:20:45.747 04:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82632' 00:20:45.747 04:04:20 -- common/autotest_common.sh@955 -- # kill 82632 00:20:45.747 04:04:20 -- common/autotest_common.sh@960 -- # wait 82632 00:20:46.006 04:04:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:46.006 04:04:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:46.006 04:04:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:46.006 04:04:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.006 04:04:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:46.006 04:04:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.006 04:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.006 04:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.006 04:04:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:46.006 00:20:46.006 real 0m2.772s 00:20:46.006 user 0m2.621s 00:20:46.006 sys 0m0.654s 00:20:46.006 04:04:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:46.006 04:04:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.006 ************************************ 00:20:46.006 END TEST nvmf_async_init 00:20:46.006 
************************************ 00:20:46.006 04:04:21 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.006 04:04:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:46.006 04:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.006 04:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:46.006 ************************************ 00:20:46.006 START TEST dma 00:20:46.006 ************************************ 00:20:46.006 04:04:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.274 * Looking for test storage... 00:20:46.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.274 04:04:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:46.274 04:04:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:46.274 04:04:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:46.274 04:04:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:46.274 04:04:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:46.274 04:04:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:46.274 04:04:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:46.274 04:04:21 -- scripts/common.sh@335 -- # IFS=.-: 00:20:46.274 04:04:21 -- scripts/common.sh@335 -- # read -ra ver1 00:20:46.274 04:04:21 -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.274 04:04:21 -- scripts/common.sh@336 -- # read -ra ver2 00:20:46.274 04:04:21 -- scripts/common.sh@337 -- # local 'op=<' 00:20:46.274 04:04:21 -- scripts/common.sh@339 -- # ver1_l=2 00:20:46.274 04:04:21 -- scripts/common.sh@340 -- # ver2_l=1 00:20:46.274 04:04:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:46.274 04:04:21 -- scripts/common.sh@343 -- # case "$op" in 00:20:46.274 04:04:21 -- scripts/common.sh@344 -- # : 1 00:20:46.274 04:04:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:46.274 04:04:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.275 04:04:21 -- scripts/common.sh@364 -- # decimal 1 00:20:46.275 04:04:21 -- scripts/common.sh@352 -- # local d=1 00:20:46.275 04:04:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.275 04:04:21 -- scripts/common.sh@354 -- # echo 1 00:20:46.275 04:04:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:46.275 04:04:21 -- scripts/common.sh@365 -- # decimal 2 00:20:46.275 04:04:21 -- scripts/common.sh@352 -- # local d=2 00:20:46.275 04:04:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.275 04:04:21 -- scripts/common.sh@354 -- # echo 2 00:20:46.275 04:04:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:46.275 04:04:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:46.275 04:04:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:46.275 04:04:21 -- scripts/common.sh@367 -- # return 0 00:20:46.275 04:04:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.275 04:04:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:46.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.275 --rc genhtml_branch_coverage=1 00:20:46.275 --rc genhtml_function_coverage=1 00:20:46.275 --rc genhtml_legend=1 00:20:46.275 --rc geninfo_all_blocks=1 00:20:46.275 --rc geninfo_unexecuted_blocks=1 00:20:46.275 00:20:46.275 ' 00:20:46.275 04:04:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:46.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.275 --rc genhtml_branch_coverage=1 00:20:46.275 --rc genhtml_function_coverage=1 00:20:46.275 --rc genhtml_legend=1 00:20:46.275 --rc geninfo_all_blocks=1 00:20:46.275 --rc geninfo_unexecuted_blocks=1 00:20:46.275 00:20:46.275 ' 00:20:46.275 04:04:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:46.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.275 --rc genhtml_branch_coverage=1 00:20:46.275 --rc genhtml_function_coverage=1 00:20:46.275 --rc genhtml_legend=1 00:20:46.275 --rc geninfo_all_blocks=1 00:20:46.275 --rc geninfo_unexecuted_blocks=1 00:20:46.275 00:20:46.275 ' 00:20:46.275 04:04:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:46.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.275 --rc genhtml_branch_coverage=1 00:20:46.275 --rc genhtml_function_coverage=1 00:20:46.275 --rc genhtml_legend=1 00:20:46.275 --rc geninfo_all_blocks=1 00:20:46.275 --rc geninfo_unexecuted_blocks=1 00:20:46.275 00:20:46.275 ' 00:20:46.275 04:04:21 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.275 04:04:21 -- nvmf/common.sh@7 -- # uname -s 00:20:46.275 04:04:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.275 04:04:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.275 04:04:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.275 04:04:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.275 04:04:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.275 04:04:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.275 04:04:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.275 04:04:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.275 04:04:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.275 04:04:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.275 04:04:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:46.275 
04:04:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:46.275 04:04:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.275 04:04:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.275 04:04:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.275 04:04:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.275 04:04:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.275 04:04:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.275 04:04:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.275 04:04:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.275 04:04:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.275 04:04:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.275 04:04:21 -- paths/export.sh@5 -- # export PATH 00:20:46.275 04:04:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.275 04:04:21 -- nvmf/common.sh@46 -- # : 0 00:20:46.275 04:04:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:46.275 04:04:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:46.275 04:04:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:46.275 04:04:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.275 04:04:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.275 04:04:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
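The "lt 1.15 2" probe near the top of the dma preamble above is scripts/common.sh deciding which lcov flag set applies: cmp_versions splits both version strings on ".", "-" or ":" and walks the components numerically. A minimal sketch of that comparison, assuming purely numeric components (the real helper also routes each piece through its decimal() normalizer):

  lt() {
      local -a ver1 ver2
      local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      # walk up to the longer component list; missing components count as 0
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov predates 2.x"    # true in this run: 1 < 2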
00:20:46.275 04:04:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:46.275 04:04:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:46.275 04:04:21 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:46.275 04:04:21 -- host/dma.sh@13 -- # exit 0 00:20:46.275 00:20:46.275 real 0m0.215s 00:20:46.275 user 0m0.124s 00:20:46.275 sys 0m0.103s 00:20:46.275 04:04:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:46.275 ************************************ 00:20:46.275 END TEST dma 00:20:46.275 04:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:46.275 ************************************ 00:20:46.275 04:04:21 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.275 04:04:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:46.275 04:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.275 04:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:46.275 ************************************ 00:20:46.275 START TEST nvmf_identify 00:20:46.275 ************************************ 00:20:46.275 04:04:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.565 * Looking for test storage... 00:20:46.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.565 04:04:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:46.565 04:04:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:46.565 04:04:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:46.565 04:04:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:46.565 04:04:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:46.565 04:04:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:46.565 04:04:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:46.565 04:04:21 -- scripts/common.sh@335 -- # IFS=.-: 00:20:46.565 04:04:21 -- scripts/common.sh@335 -- # read -ra ver1 00:20:46.565 04:04:21 -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.565 04:04:21 -- scripts/common.sh@336 -- # read -ra ver2 00:20:46.565 04:04:21 -- scripts/common.sh@337 -- # local 'op=<' 00:20:46.565 04:04:21 -- scripts/common.sh@339 -- # ver1_l=2 00:20:46.565 04:04:21 -- scripts/common.sh@340 -- # ver2_l=1 00:20:46.565 04:04:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:46.565 04:04:21 -- scripts/common.sh@343 -- # case "$op" in 00:20:46.565 04:04:21 -- scripts/common.sh@344 -- # : 1 00:20:46.565 04:04:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:46.565 04:04:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.565 04:04:21 -- scripts/common.sh@364 -- # decimal 1 00:20:46.565 04:04:21 -- scripts/common.sh@352 -- # local d=1 00:20:46.565 04:04:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.565 04:04:21 -- scripts/common.sh@354 -- # echo 1 00:20:46.565 04:04:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:46.565 04:04:21 -- scripts/common.sh@365 -- # decimal 2 00:20:46.565 04:04:21 -- scripts/common.sh@352 -- # local d=2 00:20:46.565 04:04:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.565 04:04:21 -- scripts/common.sh@354 -- # echo 2 00:20:46.565 04:04:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:46.565 04:04:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:46.565 04:04:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:46.565 04:04:21 -- scripts/common.sh@367 -- # return 0 00:20:46.565 04:04:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.565 04:04:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:46.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.565 --rc genhtml_branch_coverage=1 00:20:46.565 --rc genhtml_function_coverage=1 00:20:46.565 --rc genhtml_legend=1 00:20:46.565 --rc geninfo_all_blocks=1 00:20:46.565 --rc geninfo_unexecuted_blocks=1 00:20:46.565 00:20:46.565 ' 00:20:46.565 04:04:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:46.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.565 --rc genhtml_branch_coverage=1 00:20:46.565 --rc genhtml_function_coverage=1 00:20:46.565 --rc genhtml_legend=1 00:20:46.565 --rc geninfo_all_blocks=1 00:20:46.565 --rc geninfo_unexecuted_blocks=1 00:20:46.565 00:20:46.565 ' 00:20:46.565 04:04:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:46.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.565 --rc genhtml_branch_coverage=1 00:20:46.565 --rc genhtml_function_coverage=1 00:20:46.565 --rc genhtml_legend=1 00:20:46.565 --rc geninfo_all_blocks=1 00:20:46.565 --rc geninfo_unexecuted_blocks=1 00:20:46.565 00:20:46.565 ' 00:20:46.565 04:04:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:46.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.565 --rc genhtml_branch_coverage=1 00:20:46.565 --rc genhtml_function_coverage=1 00:20:46.565 --rc genhtml_legend=1 00:20:46.565 --rc geninfo_all_blocks=1 00:20:46.565 --rc geninfo_unexecuted_blocks=1 00:20:46.565 00:20:46.565 ' 00:20:46.565 04:04:21 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.565 04:04:21 -- nvmf/common.sh@7 -- # uname -s 00:20:46.565 04:04:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.565 04:04:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.565 04:04:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.565 04:04:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.565 04:04:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.565 04:04:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.565 04:04:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.565 04:04:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.565 04:04:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.565 04:04:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.565 04:04:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:46.565 
04:04:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:20:46.565 04:04:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.565 04:04:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.565 04:04:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.565 04:04:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.565 04:04:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.565 04:04:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.565 04:04:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.565 04:04:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.565 04:04:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.565 04:04:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.565 04:04:21 -- paths/export.sh@5 -- # export PATH 00:20:46.565 04:04:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.565 04:04:21 -- nvmf/common.sh@46 -- # : 0 00:20:46.566 04:04:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:46.566 04:04:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:46.566 04:04:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:46.566 04:04:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.566 04:04:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.566 04:04:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
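identify.sh sources the same nvmf/common.sh preamble, and the "nvme gen-hostnqn" above mints a fresh uuid-based host NQN for each run. The NVME_HOST array it builds is aimed at kernel-initiator tests; this job never expands it, because identify drives the SPDK initiator instead. A hypothetical sketch of its intended use, with the connect line labeled as illustration only:

  # Hypothetical usage of the variables set above; no nvme connect occurs in this
  # log, since the test talks to the target via SPDK rather than the kernel.
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<random>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # the bare uuid, matching the traced values
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"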
00:20:46.566 04:04:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:46.566 04:04:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:46.566 04:04:21 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.566 04:04:21 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.566 04:04:21 -- host/identify.sh@14 -- # nvmftestinit 00:20:46.566 04:04:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:46.566 04:04:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.566 04:04:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:46.566 04:04:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:46.566 04:04:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:46.566 04:04:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.566 04:04:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.566 04:04:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.566 04:04:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:46.566 04:04:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:46.566 04:04:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:46.566 04:04:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:46.566 04:04:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:46.566 04:04:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:46.566 04:04:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.566 04:04:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.566 04:04:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.566 04:04:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:46.566 04:04:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.566 04:04:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.566 04:04:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.566 04:04:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.566 04:04:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.566 04:04:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.566 04:04:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.566 04:04:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.566 04:04:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:46.566 04:04:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:46.566 Cannot find device "nvmf_tgt_br" 00:20:46.566 04:04:21 -- nvmf/common.sh@154 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.566 Cannot find device "nvmf_tgt_br2" 00:20:46.566 04:04:21 -- nvmf/common.sh@155 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:46.566 04:04:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:46.566 Cannot find device "nvmf_tgt_br" 00:20:46.566 04:04:21 -- nvmf/common.sh@157 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:46.566 Cannot find device "nvmf_tgt_br2" 00:20:46.566 04:04:21 -- nvmf/common.sh@158 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:46.566 04:04:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:46.566 04:04:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.566 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:46.566 04:04:21 -- nvmf/common.sh@161 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.566 04:04:21 -- nvmf/common.sh@162 -- # true 00:20:46.566 04:04:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.566 04:04:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.835 04:04:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.835 04:04:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.835 04:04:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.835 04:04:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.835 04:04:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.835 04:04:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.835 04:04:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.835 04:04:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:46.835 04:04:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:46.835 04:04:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:46.835 04:04:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:46.835 04:04:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.835 04:04:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.835 04:04:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.835 04:04:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:46.835 04:04:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:46.835 04:04:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.835 04:04:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.835 04:04:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.835 04:04:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.835 04:04:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.835 04:04:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:46.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:46.835 00:20:46.835 --- 10.0.0.2 ping statistics --- 00:20:46.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.835 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:46.835 04:04:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:46.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:20:46.835 00:20:46.835 --- 10.0.0.3 ping statistics --- 00:20:46.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.835 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:46.835 04:04:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:46.835 00:20:46.835 --- 10.0.0.1 ping statistics --- 00:20:46.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.835 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:46.835 04:04:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.835 04:04:21 -- nvmf/common.sh@421 -- # return 0 00:20:46.835 04:04:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:46.835 04:04:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.835 04:04:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:46.835 04:04:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:46.835 04:04:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.835 04:04:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:46.835 04:04:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:46.835 04:04:21 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:46.835 04:04:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.835 04:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:46.835 04:04:21 -- host/identify.sh@19 -- # nvmfpid=82920 00:20:46.835 04:04:21 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.835 04:04:21 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.835 04:04:21 -- host/identify.sh@23 -- # waitforlisten 82920 00:20:46.835 04:04:21 -- common/autotest_common.sh@829 -- # '[' -z 82920 ']' 00:20:46.835 04:04:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.835 04:04:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.835 04:04:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.835 04:04:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.835 04:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 [2024-11-08 04:04:21.974225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:47.094 [2024-11-08 04:04:21.974306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.094 [2024-11-08 04:04:22.118921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.352 [2024-11-08 04:04:22.234191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.352 [2024-11-08 04:04:22.234377] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.352 [2024-11-08 04:04:22.234395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.352 [2024-11-08 04:04:22.234406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
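nvmf_veth_init above (the failed deletes at its start are just first-run cleanup noise) builds the topology everything from here on runs over: the target inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2, the initiator on the host side at 10.0.0.1, both veth legs tied to one bridge, port 4420 opened, and three pings to prove reachability before nvmf_tgt is launched inside the namespace. The essential commands, condensed from the trace (the second target leg at 10.0.0.3 is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge both legs together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator-to-target sanity check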
00:20:47.352 [2024-11-08 04:04:22.234581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.352 [2024-11-08 04:04:22.235240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.352 [2024-11-08 04:04:22.235148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.352 [2024-11-08 04:04:22.235235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.918 04:04:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.918 04:04:22 -- common/autotest_common.sh@862 -- # return 0 00:20:47.918 04:04:22 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.918 04:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.918 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:20:47.918 [2024-11-08 04:04:23.005488] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.918 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.918 04:04:23 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:47.918 04:04:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.918 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 04:04:23 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 Malloc0 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 [2024-11-08 04:04:23.123155] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:48.176 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.176 04:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:48.176 [2024-11-08 04:04:23.138935] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:48.176 [ 
00:20:48.176 { 00:20:48.176 "allow_any_host": true, 00:20:48.176 "hosts": [], 00:20:48.176 "listen_addresses": [ 00:20:48.176 { 00:20:48.176 "adrfam": "IPv4", 00:20:48.176 "traddr": "10.0.0.2", 00:20:48.176 "transport": "TCP", 00:20:48.176 "trsvcid": "4420", 00:20:48.176 "trtype": "TCP" 00:20:48.176 } 00:20:48.176 ], 00:20:48.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:48.176 "subtype": "Discovery" 00:20:48.176 }, 00:20:48.176 { 00:20:48.176 "allow_any_host": true, 00:20:48.176 "hosts": [], 00:20:48.176 "listen_addresses": [ 00:20:48.176 { 00:20:48.176 "adrfam": "IPv4", 00:20:48.176 "traddr": "10.0.0.2", 00:20:48.176 "transport": "TCP", 00:20:48.176 "trsvcid": "4420", 00:20:48.176 "trtype": "TCP" 00:20:48.176 } 00:20:48.176 ], 00:20:48.176 "max_cntlid": 65519, 00:20:48.176 "max_namespaces": 32, 00:20:48.176 "min_cntlid": 1, 00:20:48.176 "model_number": "SPDK bdev Controller", 00:20:48.176 "namespaces": [ 00:20:48.176 { 00:20:48.176 "bdev_name": "Malloc0", 00:20:48.176 "eui64": "ABCDEF0123456789", 00:20:48.176 "name": "Malloc0", 00:20:48.176 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:48.176 "nsid": 1, 00:20:48.176 "uuid": "bd7e30c0-c074-4703-8fe6-39d9f9bf5cf4" 00:20:48.176 } 00:20:48.176 ], 00:20:48.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.176 "serial_number": "SPDK00000000000001", 00:20:48.176 "subtype": "NVMe" 00:20:48.176 } 00:20:48.176 ] 00:20:48.176 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.176 04:04:23 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:48.176 [2024-11-08 04:04:23.174593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
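Before the identify output begins, note the shape of the nvmf_get_subsystems dump above: the always-present discovery service plus cnode1 carrying the Malloc0 namespace with the nguid/eui64 values set by the rpc_cmd calls. For reading dumps like this one, a quick reduction; the jq invocation is illustrative, not part of the test:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq -r '.[] | "\(.subtype)\t\(.nqn)"'
  # Discovery   nqn.2014-08.org.nvmexpress.discovery
  # NVMe        nqn.2016-06.io.spdk:cnode1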
00:20:48.176 [2024-11-08 04:04:23.174642] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82973 ] 00:20:48.438 [2024-11-08 04:04:23.307730] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:48.438 [2024-11-08 04:04:23.307815] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:48.438 [2024-11-08 04:04:23.307822] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:48.438 [2024-11-08 04:04:23.307832] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:48.438 [2024-11-08 04:04:23.307842] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:48.438 [2024-11-08 04:04:23.307970] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:48.438 [2024-11-08 04:04:23.308025] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e30d30 0 00:20:48.438 [2024-11-08 04:04:23.320481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:48.438 [2024-11-08 04:04:23.320500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:48.438 [2024-11-08 04:04:23.320506] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:48.438 [2024-11-08 04:04:23.320509] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:48.438 [2024-11-08 04:04:23.320558] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.320565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.320569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.320584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:48.438 [2024-11-08 04:04:23.320613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.328512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.328530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.328535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.438 [2024-11-08 04:04:23.328556] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:48.438 [2024-11-08 04:04:23.328564] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:48.438 [2024-11-08 04:04:23.328570] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:48.438 [2024-11-08 04:04:23.328586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 
04:04:23.328594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.328602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.438 [2024-11-08 04:04:23.328629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.328739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.328746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.328749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.438 [2024-11-08 04:04:23.328760] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:48.438 [2024-11-08 04:04:23.328767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:48.438 [2024-11-08 04:04:23.328774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.328789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.438 [2024-11-08 04:04:23.328807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.328908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.328914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.328918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.438 [2024-11-08 04:04:23.328928] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:48.438 [2024-11-08 04:04:23.328936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:48.438 [2024-11-08 04:04:23.328943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328946] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.328951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.328958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.438 [2024-11-08 04:04:23.328974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.329037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.329043] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.329046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.438 [2024-11-08 04:04:23.329056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:48.438 [2024-11-08 04:04:23.329066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.329080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.438 [2024-11-08 04:04:23.329096] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.329160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.329166] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.329169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.438 [2024-11-08 04:04:23.329178] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:48.438 [2024-11-08 04:04:23.329182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:48.438 [2024-11-08 04:04:23.329189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:48.438 [2024-11-08 04:04:23.329295] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:48.438 [2024-11-08 04:04:23.329307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:48.438 [2024-11-08 04:04:23.329317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.438 [2024-11-08 04:04:23.329325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.438 [2024-11-08 04:04:23.329332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.438 [2024-11-08 04:04:23.329349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.438 [2024-11-08 04:04:23.329503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.438 [2024-11-08 04:04:23.329538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.438 [2024-11-08 04:04:23.329542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
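What the DEBUG flood above (and continuing below) records is spdk_nvme_identify bringing the admin queue up step by step: FABRIC CONNECT on qid 0, PROPERTY GETs of the VS and CAP registers, the check that CC.EN = 0 and CSTS.RDY = 0, then the CC.EN = 1 PROPERTY SET just traced, after which it polls for CSTS.RDY = 1 before resetting the admin queue and issuing IDENTIFY. The exact command under trace, runnable by hand against the same listener:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all    # -L all enables the per-module debug logs seen in this trace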
00:20:48.438 [2024-11-08 04:04:23.329545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.329552] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:48.439 [2024-11-08 04:04:23.329563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.329578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.439 [2024-11-08 04:04:23.329598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.439 [2024-11-08 04:04:23.329670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.439 [2024-11-08 04:04:23.329686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.439 [2024-11-08 04:04:23.329689] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329693] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.329699] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:48.439 [2024-11-08 04:04:23.329704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.329712] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:48.439 [2024-11-08 04:04:23.329729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.329739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.329755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.439 [2024-11-08 04:04:23.329774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.439 [2024-11-08 04:04:23.329968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.439 [2024-11-08 04:04:23.329975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.439 [2024-11-08 04:04:23.329978] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.329982] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e30d30): datao=0, datal=4096, cccid=0 00:20:48.439 [2024-11-08 04:04:23.329987] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8ef30) on tqpair(0x1e30d30): expected_datao=0, 
payload_size=4096 00:20:48.439 [2024-11-08 04:04:23.329995] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330000] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.439 [2024-11-08 04:04:23.330013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.439 [2024-11-08 04:04:23.330016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.330028] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:48.439 [2024-11-08 04:04:23.330034] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:48.439 [2024-11-08 04:04:23.330038] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:48.439 [2024-11-08 04:04:23.330043] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:48.439 [2024-11-08 04:04:23.330048] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:48.439 [2024-11-08 04:04:23.330052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.330065] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.330073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.439 [2024-11-08 04:04:23.330105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.439 [2024-11-08 04:04:23.330182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.439 [2024-11-08 04:04:23.330188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.439 [2024-11-08 04:04:23.330191] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8ef30) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.330203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.439 [2024-11-08 
04:04:23.330221] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330225] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.439 [2024-11-08 04:04:23.330239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.439 [2024-11-08 04:04:23.330255] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.439 [2024-11-08 04:04:23.330271] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.330283] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:48.439 [2024-11-08 04:04:23.330290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.439 [2024-11-08 04:04:23.330321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ef30, cid 0, qid 0 00:20:48.439 [2024-11-08 04:04:23.330328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f090, cid 1, qid 0 00:20:48.439 [2024-11-08 04:04:23.330332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f1f0, cid 2, qid 0 00:20:48.439 [2024-11-08 04:04:23.330337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.439 [2024-11-08 04:04:23.330341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f4b0, cid 4, qid 0 00:20:48.439 [2024-11-08 04:04:23.330463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.439 [2024-11-08 04:04:23.330469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.439 [2024-11-08 04:04:23.330472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1e8f4b0) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.330482] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:48.439 [2024-11-08 04:04:23.330488] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:48.439 [2024-11-08 04:04:23.330498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330519] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.439 [2024-11-08 04:04:23.330544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f4b0, cid 4, qid 0 00:20:48.439 [2024-11-08 04:04:23.330626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.439 [2024-11-08 04:04:23.330633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.439 [2024-11-08 04:04:23.330636] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330640] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e30d30): datao=0, datal=4096, cccid=4 00:20:48.439 [2024-11-08 04:04:23.330644] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8f4b0) on tqpair(0x1e30d30): expected_datao=0, payload_size=4096 00:20:48.439 [2024-11-08 04:04:23.330651] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330655] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.439 [2024-11-08 04:04:23.330668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.439 [2024-11-08 04:04:23.330671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330675] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f4b0) on tqpair=0x1e30d30 00:20:48.439 [2024-11-08 04:04:23.330688] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:48.439 [2024-11-08 04:04:23.330714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330719] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.439 [2024-11-08 04:04:23.330723] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e30d30) 00:20:48.439 [2024-11-08 04:04:23.330730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.440 [2024-11-08 04:04:23.330736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e30d30) 00:20:48.440 [2024-11-08 04:04:23.330749] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.440 [2024-11-08 04:04:23.330772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f4b0, cid 4, qid 0 00:20:48.440 [2024-11-08 04:04:23.330778] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f610, cid 5, qid 0 00:20:48.440 [2024-11-08 04:04:23.330914] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.440 [2024-11-08 04:04:23.330921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.440 [2024-11-08 04:04:23.330924] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330928] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e30d30): datao=0, datal=1024, cccid=4 00:20:48.440 [2024-11-08 04:04:23.330932] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8f4b0) on tqpair(0x1e30d30): expected_datao=0, payload_size=1024 00:20:48.440 [2024-11-08 04:04:23.330938] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330942] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.440 [2024-11-08 04:04:23.330952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.440 [2024-11-08 04:04:23.330955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.330958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f610) on tqpair=0x1e30d30 00:20:48.440 [2024-11-08 04:04:23.375429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.440 [2024-11-08 04:04:23.375449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.440 [2024-11-08 04:04:23.375453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.375457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f4b0) on tqpair=0x1e30d30 00:20:48.440 [2024-11-08 04:04:23.375478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.375484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.375487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e30d30) 00:20:48.440 [2024-11-08 04:04:23.375495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.440 [2024-11-08 04:04:23.375525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f4b0, cid 4, qid 0 00:20:48.440 [2024-11-08 04:04:23.375624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.440 [2024-11-08 04:04:23.375630] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.440 [2024-11-08 04:04:23.375634] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.440 [2024-11-08 04:04:23.375637] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e30d30): datao=0, datal=3072, cccid=4 00:20:48.440 [2024-11-08 04:04:23.375641] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8f4b0) on tqpair(0x1e30d30): expected_datao=0, payload_size=3072 00:20:48.440 [2024-11-08 
04:04:23.375648] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375651] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375659] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.440 [2024-11-08 04:04:23.375664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.440 [2024-11-08 04:04:23.375667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f4b0) on tqpair=0x1e30d30
00:20:48.440 [2024-11-08 04:04:23.375680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e30d30)
00:20:48.440 [2024-11-08 04:04:23.375693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.440 [2024-11-08 04:04:23.375724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f4b0, cid 4, qid 0
00:20:48.440 [2024-11-08 04:04:23.375829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.440 [2024-11-08 04:04:23.375834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.440 [2024-11-08 04:04:23.375838] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375841] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e30d30): datao=0, datal=8, cccid=4
00:20:48.440 [2024-11-08 04:04:23.375845] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8f4b0) on tqpair(0x1e30d30): expected_datao=0, payload_size=8
00:20:48.440 [2024-11-08 04:04:23.375851] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.440 [2024-11-08 04:04:23.375854] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.440 =====================================================
00:20:48.440 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:48.440 =====================================================
00:20:48.440 Controller Capabilities/Features
00:20:48.440 ================================
00:20:48.440 Vendor ID: 0000
00:20:48.440 Subsystem Vendor ID: 0000
00:20:48.440 Serial Number: ....................
00:20:48.440 Model Number: ........................................
00:20:48.440 Firmware Version: 24.01.1
00:20:48.440 Recommended Arb Burst: 0
00:20:48.440 IEEE OUI Identifier: 00 00 00
00:20:48.440 Multi-path I/O
00:20:48.440 May have multiple subsystem ports: No
00:20:48.440 May have multiple controllers: No
00:20:48.440 Associated with SR-IOV VF: No
00:20:48.440 Max Data Transfer Size: 131072
00:20:48.440 Max Number of Namespaces: 0
00:20:48.440 Max Number of I/O Queues: 1024
00:20:48.440 NVMe Specification Version (VS): 1.3
00:20:48.440 NVMe Specification Version (Identify): 1.3
00:20:48.440 Maximum Queue Entries: 128
00:20:48.440 Contiguous Queues Required: Yes
00:20:48.440 Arbitration Mechanisms Supported
00:20:48.440 Weighted Round Robin: Not Supported
00:20:48.440 Vendor Specific: Not Supported
00:20:48.440 Reset Timeout: 15000 ms
00:20:48.440 Doorbell Stride: 4 bytes
00:20:48.440 NVM Subsystem Reset: Not Supported
00:20:48.440 Command Sets Supported
00:20:48.440 NVM Command Set: Supported
00:20:48.440 Boot Partition: Not Supported
00:20:48.440 Memory Page Size Minimum: 4096 bytes
00:20:48.440 Memory Page Size Maximum: 4096 bytes
00:20:48.440 Persistent Memory Region: Not Supported
00:20:48.440 Optional Asynchronous Events Supported
00:20:48.440 Namespace Attribute Notices: Not Supported
00:20:48.440 Firmware Activation Notices: Not Supported
00:20:48.440 ANA Change Notices: Not Supported
00:20:48.440 PLE Aggregate Log Change Notices: Not Supported
00:20:48.440 LBA Status Info Alert Notices: Not Supported
00:20:48.440 EGE Aggregate Log Change Notices: Not Supported
00:20:48.440 Normal NVM Subsystem Shutdown event: Not Supported
00:20:48.440 Zone Descriptor Change Notices: Not Supported
00:20:48.440 Discovery Log Change Notices: Supported
00:20:48.440 Controller Attributes
00:20:48.440 128-bit Host Identifier: Not Supported
00:20:48.440 Non-Operational Permissive Mode: Not Supported
00:20:48.440 NVM Sets: Not Supported
00:20:48.440 Read Recovery Levels: Not Supported
00:20:48.440 Endurance Groups: Not Supported
00:20:48.440 Predictable Latency Mode: Not Supported
00:20:48.440 Traffic Based Keep ALive: Not Supported
00:20:48.440 Namespace Granularity: Not Supported
00:20:48.440 SQ Associations: Not Supported
00:20:48.440 UUID List: Not Supported
00:20:48.440 Multi-Domain Subsystem: Not Supported
00:20:48.440 Fixed Capacity Management: Not Supported
00:20:48.440 Variable Capacity Management: Not Supported
00:20:48.440 Delete Endurance Group: Not Supported
00:20:48.440 Delete NVM Set: Not Supported
00:20:48.440 Extended LBA Formats Supported: Not Supported
00:20:48.440 Flexible Data Placement Supported: Not Supported
00:20:48.440
00:20:48.440 Controller Memory Buffer Support
00:20:48.440 ================================
00:20:48.440 Supported: No
00:20:48.440
00:20:48.440 Persistent Memory Region Support
00:20:48.440 ================================
00:20:48.440 Supported: No
00:20:48.440
00:20:48.440 Admin Command Set Attributes
00:20:48.440 ============================
00:20:48.440 Security Send/Receive: Not Supported
00:20:48.440 Format NVM: Not Supported
00:20:48.440 Firmware Activate/Download: Not Supported
00:20:48.440 Namespace Management: Not Supported
00:20:48.440 Device Self-Test: Not Supported
00:20:48.440 Directives: Not Supported
00:20:48.440 NVMe-MI: Not Supported
00:20:48.440 Virtualization Management: Not Supported
00:20:48.440 Doorbell Buffer Config: Not Supported
00:20:48.440 Get LBA Status Capability: Not Supported
00:20:48.440 Command & Feature Lockdown Capability: Not Supported
00:20:48.440 Abort Command Limit: 1
00:20:48.440 Async Event Request Limit: 4
00:20:48.440 Number of Firmware Slots: N/A
00:20:48.440 Firmware Slot 1 Read-Only: N/A
00:20:48.440 Firmware Activation Without Reset: N/A
00:20:48.440 Multiple Update Detection Support: N/A
00:20:48.440 Firmware Update Granularity: No Information Provided
00:20:48.440 Per-Namespace SMART Log: No
00:20:48.440 Asymmetric Namespace Access Log Page: Not Supported
00:20:48.440 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:48.440 Command Effects Log Page: Not Supported
00:20:48.440 Get Log Page Extended Data: Supported
00:20:48.440 Telemetry Log Pages: Not Supported
00:20:48.440 Persistent Event Log Pages: Not Supported
00:20:48.440 Supported Log Pages Log Page: May Support
00:20:48.440 Commands Supported & Effects Log Page: Not Supported
00:20:48.440 Feature Identifiers & Effects Log Page:May Support
00:20:48.440 NVMe-MI Commands & Effects Log Page: May Support
00:20:48.440 Data Area 4 for Telemetry Log: Not Supported
00:20:48.441 Error Log Page Entries Supported: 128
00:20:48.441 Keep Alive: Not Supported
00:20:48.441
00:20:48.441 NVM Command Set Attributes
00:20:48.441 ==========================
00:20:48.441 Submission Queue Entry Size
00:20:48.441 Max: 1
00:20:48.441 Min: 1
00:20:48.441 Completion Queue Entry Size
00:20:48.441 Max: 1
00:20:48.441 Min: 1
00:20:48.441 Number of Namespaces: 0
00:20:48.441 Compare Command: Not Supported
00:20:48.441 Write Uncorrectable Command: Not Supported
00:20:48.441 Dataset Management Command: Not Supported
00:20:48.441 Write Zeroes Command: Not Supported
00:20:48.441 Set Features Save Field: Not Supported
00:20:48.441 Reservations: Not Supported
00:20:48.441 Timestamp: Not Supported
00:20:48.441 Copy: Not Supported
00:20:48.441 Volatile Write Cache: Not Present
00:20:48.441 Atomic Write Unit (Normal): 1
00:20:48.441 Atomic Write Unit (PFail): 1
00:20:48.441 Atomic Compare & Write Unit: 1
00:20:48.441 Fused Compare & Write: Supported
00:20:48.441 Scatter-Gather List
00:20:48.441 SGL Command Set: Supported
00:20:48.441 SGL Keyed: Supported
00:20:48.441 SGL Bit Bucket Descriptor: Not Supported
00:20:48.441 SGL Metadata Pointer: Not Supported
00:20:48.441 Oversized SGL: Not Supported
00:20:48.441 SGL Metadata Address: Not Supported
00:20:48.441 SGL Offset: Supported
00:20:48.441 Transport SGL Data Block: Not Supported
00:20:48.441 Replay Protected Memory Block: Not Supported
00:20:48.441
00:20:48.441 Firmware Slot Information
00:20:48.441 =========================
00:20:48.441 Active slot: 0
00:20:48.441
00:20:48.441
00:20:48.441 Error Log
00:20:48.441 =========
00:20:48.441
00:20:48.441 Active Namespaces
00:20:48.441 =================
00:20:48.441 Discovery Log Page
00:20:48.441 ==================
00:20:48.441 Generation Counter: 2
00:20:48.441 Number of Records: 2
00:20:48.441 Record Format: 0
00:20:48.441
00:20:48.441 Discovery Log Entry 0
00:20:48.441 ----------------------
00:20:48.441 Transport Type: 3 (TCP)
00:20:48.441 Address Family: 1 (IPv4)
00:20:48.441 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:48.441 Entry Flags:
00:20:48.441 Duplicate Returned Information: 1
00:20:48.441 Explicit Persistent Connection Support for Discovery: 1
00:20:48.441 Transport Requirements:
00:20:48.441 Secure Channel: Not Required
00:20:48.441 Port ID: 0 (0x0000)
00:20:48.441 Controller ID: 65535 (0xffff)
00:20:48.441 Admin Max SQ Size: 128
00:20:48.441 Transport Service Identifier: 4420
00:20:48.441 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:48.441 Transport Address: 10.0.0.2
00:20:48.441 Discovery Log Entry 1
00:20:48.441 ----------------------
00:20:48.441 Transport Type: 3 (TCP)
00:20:48.441 Address Family: 1 (IPv4)
00:20:48.441 Subsystem Type: 2 (NVM Subsystem)
00:20:48.441 Entry Flags:
00:20:48.441 Duplicate Returned Information: 0
00:20:48.441 Explicit Persistent Connection Support for Discovery: 0
00:20:48.441 Transport Requirements:
00:20:48.441 Secure Channel: Not Required
00:20:48.441 Port ID: 0 (0x0000)
00:20:48.441 Controller ID: 65535 (0xffff)
00:20:48.441 Admin Max SQ Size: 128
00:20:48.441 Transport Service Identifier: 4420
00:20:48.441 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:48.441 Transport Address: 10.0.0.2 [2024-11-08 04:04:23.418443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.441 [2024-11-08 04:04:23.418461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.441 [2024-11-08 04:04:23.418465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f4b0) on tqpair=0x1e30d30
00:20:48.441 [2024-11-08 04:04:23.418572] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:48.441 [2024-11-08 04:04:23.418590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.441 [2024-11-08 04:04:23.418596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.441 [2024-11-08 04:04:23.418602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.441 [2024-11-08 04:04:23.418607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.441 [2024-11-08 04:04:23.418615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30)
00:20:48.441 [2024-11-08 04:04:23.418630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.441 [2024-11-08 04:04:23.418652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0
00:20:48.441 [2024-11-08 04:04:23.418713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.441 [2024-11-08 04:04:23.418719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.441 [2024-11-08 04:04:23.418722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30
00:20:48.441 [2024-11-08 04:04:23.418733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.441 [2024-11-08 04:04:23.418740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30)
00:20:48.441 [2024-11-08 04:04:23.418746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.441 [2024-11-08 04:04:23.418765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.441 [2024-11-08 04:04:23.418851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.441 [2024-11-08 04:04:23.418857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.441 [2024-11-08 04:04:23.418860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.418863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.441 [2024-11-08 04:04:23.418868] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:48.441 [2024-11-08 04:04:23.418873] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:48.441 [2024-11-08 04:04:23.418881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.418885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.418888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.441 [2024-11-08 04:04:23.418894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.441 [2024-11-08 04:04:23.418909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.441 [2024-11-08 04:04:23.418974] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.441 [2024-11-08 04:04:23.418980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.441 [2024-11-08 04:04:23.418983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.418987] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.441 [2024-11-08 04:04:23.418996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.441 [2024-11-08 04:04:23.419009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.441 [2024-11-08 04:04:23.419024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.441 [2024-11-08 04:04:23.419087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.441 [2024-11-08 04:04:23.419092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.441 [2024-11-08 04:04:23.419095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.441 [2024-11-08 04:04:23.419108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1e30d30) 00:20:48.441 [2024-11-08 04:04:23.419120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.441 [2024-11-08 04:04:23.419135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.441 [2024-11-08 04:04:23.419208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.441 [2024-11-08 04:04:23.419214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.441 [2024-11-08 04:04:23.419217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.441 [2024-11-08 04:04:23.419230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.441 [2024-11-08 04:04:23.419236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.441 [2024-11-08 04:04:23.419242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.441 [2024-11-08 04:04:23.419256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.441 [2024-11-08 04:04:23.419320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419332] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419348] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.419368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.419447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
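The long run of near-identical records here, continuing below, is the controller-shutdown poll: after "Prepare to destruct SSD" the host sets CC.SHN, and each FABRIC PROPERTY GET qid:0 cid:3 is one read of CSTS while it waits for SHST to report shutdown complete, which the trace eventually confirms as "shutdown complete in 7 milliseconds". A rough illustrative sketch of that loop follows; the production loop is internal to SPDK's nvme_ctrlr.c, and spdk_nvme_ctrlr_get_regs_csts() is merely the public register getter, which on fabrics transports is assumed here to be backed by the same PROPERTY GET capsules.

```c
#include "spdk/nvme.h"

/* Illustrative sketch only of the CSTS.SHST poll visible in the trace;
 * the real loop runs inside SPDK's controller-destruct state machine,
 * not in application code. Assumes `ctrlr` is a connected handle. */
static void
wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts;

	do {
		/* One register read corresponds to one property-get exchange
		 * on an NVMe/TCP admin queue pair. */
		csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	} while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}
```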
00:20:48.442 [2024-11-08 04:04:23.419502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.419571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419580] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.419620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.419687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419699] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419712] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.419735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.419796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.419844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.419902] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.419908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.419911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.419924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.419931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.419936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.419951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.420013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.420019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.420023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.420035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.420048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.420063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.420130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.420136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.420139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.420151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.420164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.420178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.420238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.420243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 
[2024-11-08 04:04:23.420246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420250] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.420259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.442 [2024-11-08 04:04:23.420271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.442 [2024-11-08 04:04:23.420285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.442 [2024-11-08 04:04:23.420350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.442 [2024-11-08 04:04:23.420361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.442 [2024-11-08 04:04:23.420365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.442 [2024-11-08 04:04:23.420378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.442 [2024-11-08 04:04:23.420385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.420479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.420490] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.420494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.420508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420539] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.420598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.420603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.420606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.420619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.420703] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.420709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.420712] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420715] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.420725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.420812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.420818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.420821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420824] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.420833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420837] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.420920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.420925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.420928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.420942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420946] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.420949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.420955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.420969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.421031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.421037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.421040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421043] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.421052] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421056] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.421065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.421079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.421138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.421144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.421147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.421159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30) 00:20:48.443 [2024-11-08 04:04:23.421172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.443 [2024-11-08 04:04:23.421186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0 00:20:48.443 [2024-11-08 04:04:23.421247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.443 [2024-11-08 04:04:23.421253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.443 [2024-11-08 04:04:23.421256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30 00:20:48.443 [2024-11-08 04:04:23.421269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.443 [2024-11-08 04:04:23.421276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1e30d30)
00:20:48.443 [2024-11-08 04:04:23.421282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.443 [2024-11-08 04:04:23.421297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8f350, cid 3, qid 0
00:20:48.443 [2024-11-08 04:04:23.421359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.443 [2024-11-08 04:04:23.421364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.443 [2024-11-08 04:04:23.421367] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.443 [2024-11-08 04:04:23.421371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30
00:20:48.443 [2024-11-08 04:04:23.421380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.443 [2024-11-08 04:04:23.421383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.443 [2024-11-08 04:04:23.421386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e30d30)
[... roughly ten further identical FABRIC PROPERTY GET qid:0 cid:3 poll iterations on tqpair 0x1e30d30 elided; only the timestamps change while the host polls CSTS during discovery-controller shutdown ...]
00:20:48.444 [2024-11-08 04:04:23.426554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.444 [2024-11-08 04:04:23.426561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.444 [2024-11-08 04:04:23.426564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.444 [2024-11-08 04:04:23.426567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e8f350) on tqpair=0x1e30d30
00:20:48.444 [2024-11-08 04:04:23.426575] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:20:48.444
00:20:48.444 04:04:23 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
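For reference, the connect step that spdk_nvme_identify performs before the DEBUG records below can be reproduced against SPDK's public host API (spdk/nvme.h). This is a minimal sketch, not the tool's actual source: the transport ID string is the one passed via -r above, and environment setup and error handling are trimmed.

    /* Minimal sketch: connect to the NVMe-oF/TCP subsystem the test targets.
     * Address and NQN are taken from the log above. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";        /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string that -r passes to spdk_nvme_identify. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect: this drives the whole init state machine the
         * DEBUG records below walk through (icreq, FABRIC CONNECT, register
         * reads, CC.EN handshake, IDENTIFY, ...). */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        printf("connected to %s\n", trid.subnqn);
        spdk_nvme_detach(ctrlr);
        return 0;
    }

Passing NULL options to spdk_nvme_connect() accepts the driver defaults, which matches what the log shows (no host digest, default keep-alive).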
00:20:48.444 [2024-11-08 04:04:23.457709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:48.444 [2024-11-08 04:04:23.457750] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82981 ]
00:20:48.708 [2024-11-08 04:04:23.590859] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:48.709 [2024-11-08 04:04:23.590918] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:48.709 [2024-11-08 04:04:23.590924] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:48.709 [2024-11-08 04:04:23.590933] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:48.709 [2024-11-08 04:04:23.590942] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:48.709 [2024-11-08 04:04:23.591031] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:48.709 [2024-11-08 04:04:23.591074] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4e1d30 0
00:20:48.709 [2024-11-08 04:04:23.606435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:48.709 [2024-11-08 04:04:23.606453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:48.709 [2024-11-08 04:04:23.606458] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:48.709 [2024-11-08 04:04:23.606461] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:48.709 [2024-11-08 04:04:23.606523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
[... repeated nvme_tcp request-plumbing *DEBUG* records (pdu type = 5 / enter / complete tcp_req / build_contig_request / capsule_cmd / tcp req) surround every admin command and are elided here and in the rest of this excerpt ...]
00:20:48.709 [2024-11-08 04:04:23.614479] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:48.709 [2024-11-08 04:04:23.614486] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:20:48.709 [2024-11-08 04:04:23.614491] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:20:48.709 [2024-11-08 04:04:23.614520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.614639] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:20:48.709 [2024-11-08 04:04:23.614646] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:20:48.709 [2024-11-08 04:04:23.614666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.614806] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:20:48.709 [2024-11-08 04:04:23.614813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.614833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.614929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.614951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.615053] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:20:48.709 [2024-11-08 04:04:23.615059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.615066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.615171] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:20:48.709 [2024-11-08 04:04:23.615180] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.615202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.615296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:48.709 [2024-11-08 04:04:23.615319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.709 [2024-11-08 04:04:23.615439] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:48.709 [2024-11-08 04:04:23.615444] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:20:48.709 [2024-11-08 04:04:23.615451] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:20:48.709 [2024-11-08 04:04:23.615466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:20:48.709 [2024-11-08 04:04:23.615488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.710 [2024-11-08 04:04:23.615643] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e1d30): datao=0, datal=4096, cccid=0
00:20:48.710 [2024-11-08 04:04:23.615647] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x53ff30) on tqpair(0x4e1d30): expected_datao=0, payload_size=4096
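The CC.EN/CSTS.RDY records above are the standard NVMe enable handshake. The sketch below is illustrative only: get_reg()/set_reg() are hypothetical stand-ins (over NVMe-oF they become the FABRIC PROPERTY GET/SET capsules printed in the log), and SPDK's internal nvme_ctrlr.c state machine is what actually drives this.

    #include <stdint.h>

    uint32_t get_reg(uint32_t off);              /* hypothetical helper */
    void     set_reg(uint32_t off, uint32_t v);  /* hypothetical helper */

    #define NVME_REG_CC   0x14  /* Controller Configuration (spec offset) */
    #define NVME_REG_CSTS 0x1c  /* Controller Status (spec offset) */
    #define CC_EN         0x1
    #define CSTS_RDY      0x1

    static void enable_controller(void)
    {
        /* 1. "disable and wait for CSTS.RDY = 0" in the log. */
        set_reg(NVME_REG_CC, get_reg(NVME_REG_CC) & ~CC_EN);
        while (get_reg(NVME_REG_CSTS) & CSTS_RDY) { /* poll */ }

        /* 2. "Setting CC.EN = 1". */
        set_reg(NVME_REG_CC, get_reg(NVME_REG_CC) | CC_EN);

        /* 3. "wait for CSTS.RDY = 1" -> controller is ready. */
        while (!(get_reg(NVME_REG_CSTS) & CSTS_RDY)) { /* poll */ }
    }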
00:20:48.710 [2024-11-08 04:04:23.615684] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:20:48.710 [2024-11-08 04:04:23.615689] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:20:48.710 [2024-11-08 04:04:23.615692] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:20:48.710 [2024-11-08 04:04:23.615696] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:20:48.710 [2024-11-08 04:04:23.615700] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:20:48.710 [2024-11-08 04:04:23.615704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.615718] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.615739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:48.710 [2024-11-08 04:04:23.615855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.710 [2024-11-08 04:04:23.615873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.710 [2024-11-08 04:04:23.615889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.710 [2024-11-08 04:04:23.615905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.710 [2024-11-08 04:04:23.615910] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.615922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.615940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.710 [2024-11-08 04:04:23.616096] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:20:48.710 [2024-11-08 04:04:23.616101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616108] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616119] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:48.710 [2024-11-08 04:04:23.616291] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.710 [2024-11-08 04:04:23.616505] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:20:48.710 [2024-11-08 04:04:23.616516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:20:48.710 [2024-11-08 04:04:23.616546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.616723] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
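Once the identify-active-ns and identify-ns states above complete ("Namespace 1 was added"), namespaces are available through the public API. A short sketch, assuming spdk/nvme.h and a connected ctrlr handle:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Walk the active namespaces the init sequence just discovered. */
    static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = 1; nsid <= spdk_nvme_ctrlr_get_num_ns(ctrlr); nsid++) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("Namespace %u: %" PRIu64 " bytes\n",
                   nsid, spdk_nvme_ns_get_size(ns));
        }
    }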
00:20:48.711 [2024-11-08 04:04:23.616754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.616917] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616937] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616944] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616948] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616953] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:20:48.711 [2024-11-08 04:04:23.616958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:20:48.711 [2024-11-08 04:04:23.616962] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:20:48.711 [2024-11-08 04:04:23.616988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.711 [2024-11-08 04:04:23.617174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.711 [2024-11-08 04:04:23.617621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.712 [2024-11-08 04:04:23.617639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.712 [2024-11-08 04:04:23.617861] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e1d30): datao=0, datal=8192, cccid=5
00:20:48.712 [2024-11-08 04:04:23.617897] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e1d30): datao=0, datal=512, cccid=4
00:20:48.712 [2024-11-08 04:04:23.617926] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e1d30): datao=0, datal=512, cccid=6
00:20:48.712 [2024-11-08 04:04:23.617954] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e1d30): datao=0, datal=4096, cccid=7
00:20:48.712 [2024-11-08 04:04:23.618043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
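The controller report that follows is printed from the cached Identify Controller data. A sketch of how that data is read through the public API, assuming a connected ctrlr handle (field names are from struct spdk_nvme_ctrlr_data; the real tool prints far more):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        printf("Vendor ID: %04x\n", cdata->vid);
        printf("Serial Number: %.20s\n", cdata->sn);
        printf("Model Number: %.40s\n", cdata->mn);
        printf("Firmware Version: %.8s\n", cdata->fr);
        /* MDTS is a power of two in units of CAP.MPSMIN pages; the report
         * below shows it already resolved to 131072 bytes. */
        printf("MDTS (raw): %u\n", cdata->mdts);
    }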
Reset Timeout: 15000 ms
00:20:48.712 Doorbell Stride: 4 bytes
00:20:48.712 NVM Subsystem Reset: Not Supported
00:20:48.712 Command Sets Supported
00:20:48.712 NVM Command Set: Supported
00:20:48.712 Boot Partition: Not Supported
00:20:48.712 Memory Page Size Minimum: 4096 bytes
00:20:48.712 Memory Page Size Maximum: 4096 bytes
00:20:48.712 Persistent Memory Region: Not Supported
00:20:48.712 Optional Asynchronous Events Supported
00:20:48.712 Namespace Attribute Notices: Supported
00:20:48.712 Firmware Activation Notices: Not Supported
00:20:48.712 ANA Change Notices: Not Supported
00:20:48.712 PLE Aggregate Log Change Notices: Not Supported
00:20:48.712 LBA Status Info Alert Notices: Not Supported
00:20:48.712 EGE Aggregate Log Change Notices: Not Supported
00:20:48.712 Normal NVM Subsystem Shutdown event: Not Supported
00:20:48.712 Zone Descriptor Change Notices: Not Supported
00:20:48.712 Discovery Log Change Notices: Not Supported
00:20:48.712 Controller Attributes
00:20:48.712 128-bit Host Identifier: Supported
00:20:48.712 Non-Operational Permissive Mode: Not Supported
00:20:48.712 NVM Sets: Not Supported
00:20:48.712 Read Recovery Levels: Not Supported
00:20:48.712 Endurance Groups: Not Supported
00:20:48.712 Predictable Latency Mode: Not Supported
00:20:48.712 Traffic Based Keep Alive: Not Supported
00:20:48.712 Namespace Granularity: Not Supported
00:20:48.712 SQ Associations: Not Supported
00:20:48.712 UUID List: Not Supported
00:20:48.712 Multi-Domain Subsystem: Not Supported
00:20:48.712 Fixed Capacity Management: Not Supported
00:20:48.712 Variable Capacity Management: Not Supported
00:20:48.712 Delete Endurance Group: Not Supported
00:20:48.712 Delete NVM Set: Not Supported
00:20:48.712 Extended LBA Formats Supported: Not Supported
00:20:48.712 Flexible Data Placement Supported: Not Supported
00:20:48.712
00:20:48.712 Controller Memory Buffer Support
00:20:48.712 ================================
00:20:48.712 Supported: No
00:20:48.712
00:20:48.712 Persistent Memory Region Support
00:20:48.712 ================================
00:20:48.712 Supported: No
00:20:48.712
00:20:48.712 Admin Command Set Attributes
00:20:48.712 ============================
00:20:48.712 Security Send/Receive: Not Supported
00:20:48.712 Format NVM: Not Supported
00:20:48.712 Firmware Activate/Download: Not Supported
00:20:48.712 Namespace Management: Not Supported
00:20:48.712 Device Self-Test: Not Supported
00:20:48.712 Directives: Not Supported
00:20:48.712 NVMe-MI: Not Supported
00:20:48.712 Virtualization Management: Not Supported
00:20:48.712 Doorbell Buffer Config: Not Supported
00:20:48.712 Get LBA Status Capability: Not Supported
00:20:48.712 Command & Feature Lockdown Capability: Not Supported
00:20:48.712 Abort Command Limit: 4
00:20:48.712 Async Event Request Limit: 4
00:20:48.712 Number of Firmware Slots: N/A
00:20:48.712 Firmware Slot 1 Read-Only: N/A
00:20:48.712 Firmware Activation Without Reset: N/A
00:20:48.712 Multiple Update Detection Support: N/A
00:20:48.712 Firmware Update Granularity: No Information Provided
00:20:48.712 Per-Namespace SMART Log: No
00:20:48.712 Asymmetric Namespace Access Log Page: Not Supported
00:20:48.712 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:48.712 Command Effects Log Page: Supported
00:20:48.712 Get Log Page Extended Data: Supported
00:20:48.712 Telemetry Log Pages: Not Supported
00:20:48.712 Persistent Event Log Pages: Not Supported
00:20:48.712 Supported Log Pages Log Page: May Support
00:20:48.712 Commands Supported & Effects Log Page: Not Supported
00:20:48.712 Feature Identifiers & Effects Log Page: May Support
00:20:48.712 NVMe-MI Commands & Effects Log Page: May Support
00:20:48.712 Data Area 4 for Telemetry Log: Not Supported
00:20:48.712 Error Log Page Entries Supported: 128
00:20:48.712 Keep Alive: Supported
00:20:48.713 Keep Alive Granularity: 10000 ms
00:20:48.713
00:20:48.713 NVM Command Set Attributes
00:20:48.713 ==========================
00:20:48.713 Submission Queue Entry Size
00:20:48.713 Max: 64
00:20:48.713 Min: 64
00:20:48.713 Completion Queue Entry Size
00:20:48.713 Max: 16
00:20:48.713 Min: 16
00:20:48.713 Number of Namespaces: 32
00:20:48.713 Compare Command: Supported
00:20:48.713 Write Uncorrectable Command: Not Supported
00:20:48.713 Dataset Management Command: Supported
00:20:48.713 Write Zeroes Command: Supported
00:20:48.713 Set Features Save Field: Not Supported
00:20:48.713 Reservations: Supported
00:20:48.713 Timestamp: Not Supported
00:20:48.713 Copy: Supported
00:20:48.713 Volatile Write Cache: Present
00:20:48.713 Atomic Write Unit (Normal): 1
00:20:48.713 Atomic Write Unit (PFail): 1
00:20:48.713 Atomic Compare & Write Unit: 1
00:20:48.713 Fused Compare & Write: Supported
00:20:48.713 Scatter-Gather List
00:20:48.713 SGL Command Set: Supported
00:20:48.713 SGL Keyed: Supported
00:20:48.713 SGL Bit Bucket Descriptor: Not Supported
00:20:48.713 SGL Metadata Pointer: Not Supported
00:20:48.713 Oversized SGL: Not Supported
00:20:48.713 SGL Metadata Address: Not Supported
00:20:48.713 SGL Offset: Supported
00:20:48.713 Transport SGL Data Block: Not Supported
00:20:48.713 Replay Protected Memory Block: Not Supported
00:20:48.713
00:20:48.713 Firmware Slot Information
00:20:48.713 =========================
00:20:48.713 Active slot: 1
00:20:48.713 Slot 1 Firmware Revision: 24.01.1
00:20:48.713
00:20:48.713
00:20:48.713 Commands Supported and Effects
00:20:48.713 ==============================
00:20:48.713 Admin Commands
00:20:48.713 --------------
00:20:48.713 Get Log Page (02h): Supported
00:20:48.713 Identify (06h): Supported
00:20:48.713 Abort (08h): Supported
00:20:48.713 Set Features (09h): Supported
00:20:48.713 Get Features (0Ah): Supported
00:20:48.713 Asynchronous Event Request (0Ch): Supported
00:20:48.713 Keep Alive (18h): Supported
00:20:48.713 I/O Commands
00:20:48.713 ------------
00:20:48.713 Flush (00h): Supported LBA-Change
00:20:48.713 Write (01h): Supported LBA-Change
00:20:48.713 Read (02h): Supported
00:20:48.713 Compare (05h): Supported
00:20:48.713 Write Zeroes (08h): Supported LBA-Change
00:20:48.713 Dataset Management (09h): Supported LBA-Change
00:20:48.713 Copy (19h): Supported LBA-Change
00:20:48.713 Unknown (79h): Supported LBA-Change
00:20:48.713 Unknown (7Ah): Supported
00:20:48.713
00:20:48.713 Error Log
00:20:48.713 =========
00:20:48.713
00:20:48.713 Arbitration
00:20:48.713 ===========
00:20:48.713 Arbitration Burst: 1
00:20:48.713
00:20:48.713 Power Management
00:20:48.713 ================
00:20:48.713 Number of Power States: 1
00:20:48.713 Current Power State: Power State #0
00:20:48.713 Power State #0:
00:20:48.713 Max Power: 0.00 W
00:20:48.713 Non-Operational State: Operational
00:20:48.713 Entry Latency: Not Reported
00:20:48.713 Exit Latency: Not Reported
00:20:48.713 Relative Read Throughput: 0
00:20:48.713 Relative Read Latency: 0
00:20:48.713 Relative Write Throughput: 0
00:20:48.713 Relative Write Latency: 0
00:20:48.713 Idle Power: Not Reported
00:20:48.713 Active Power: Not Reported
00:20:48.713 Non-Operational Permissive Mode: Not Supported
00:20:48.713
00:20:48.713 Health Information
00:20:48.713 ==================
00:20:48.713 Critical Warnings:
00:20:48.713 Available Spare Space: OK
00:20:48.713 Temperature: OK
00:20:48.713 Device Reliability: OK
00:20:48.713 Read Only: No
00:20:48.713 Volatile Memory Backup: OK
00:20:48.713 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:48.713 Temperature Threshold: [2024-11-08 04:04:23.618048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.713 [2024-11-08 04:04:23.618052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5408d0) on tqpair=0x4e1d30 00:20:48.713 [2024-11-08 04:04:23.618149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4e1d30) 00:20:48.713 [2024-11-08 04:04:23.618165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.713 [2024-11-08 04:04:23.618186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5408d0, cid 7, qid 0 00:20:48.713 [2024-11-08 04:04:23.618268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.713 [2024-11-08 04:04:23.618274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.713 [2024-11-08 04:04:23.618277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5408d0) on tqpair=0x4e1d30 00:20:48.713 [2024-11-08 04:04:23.618311] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:48.713 [2024-11-08 04:04:23.618321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.713 [2024-11-08 04:04:23.618327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.713 [2024-11-08 04:04:23.618333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.713 [2024-11-08 04:04:23.618338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.713 [2024-11-08 04:04:23.618345] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618349] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.618352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.713 [2024-11-08 04:04:23.618358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.713 [2024-11-08 04:04:23.618378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.713 [2024-11-08 04:04:23.622440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.713 [2024-11-08 04:04:23.622456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
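
The identify dump above (through "Temperature Threshold:") is the controller data the host cached for nqn.2016-06.io.spdk:cnode1 just before tearing the controller down. For reference, a minimal sketch of how the same fields are reached through SPDK's public host API follows; it is not part of this test, and the 127.0.0.1:4420 listener is an assumed placeholder. Only the subsystem NQN is taken from the log.

/* identify_sketch.c - minimal sketch: connect to an NVMe-oF/TCP subsystem
 * and print a few of the controller-data fields seen in the dump above. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Describe the NVMe-oF/TCP target; the address is an assumption. */
	memset(&trid, 0, sizeof(trid));
	trid.trtype = SPDK_NVME_TRANSPORT_TCP;
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "127.0.0.1"); /* assumed */
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");    /* assumed */
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect\n");
		return 1;
	}

	/* The cached Identify Controller data - the source of the dump above. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Subsystem NQN: %.*s\n", (int)sizeof(cdata->subnqn),
	       (const char *)cdata->subnqn);
	printf("Abort Command Limit: %u\n", cdata->acl + 1);  /* 0's based field */
	printf("Async Event Request Limit: %u\n", cdata->aerl + 1);
	printf("Error Log Page Entries Supported: %u\n", cdata->elpe + 1);
	printf("Keep Alive Granularity: %u ms\n", cdata->kas * 100);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The 0's-based and scaled identify fields explain the numbers printed above: an ACL of 3 prints as "Abort Command Limit: 4", an ELPE of 127 as 128 error log entries, and a KAS of 100 (reported in 100 ms units) gives the 10000 ms keep-alive granularity.
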
00:20:48.713 [2024-11-08 04:04:23.622460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.622464] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.713 [2024-11-08 04:04:23.622475] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.622479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.713 [2024-11-08 04:04:23.622482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.713 [2024-11-08 04:04:23.622490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.713 [2024-11-08 04:04:23.622515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.713 [2024-11-08 04:04:23.622597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.622603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.622607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.622615] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:48.714 [2024-11-08 04:04:23.622620] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:48.714 [2024-11-08 04:04:23.622628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.622642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.622659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.622734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.622739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.622743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.622755] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.622769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.622785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.622864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 
[2024-11-08 04:04:23.622870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.622873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.622886] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.622899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.622915] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.622983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.622988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.622992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.622996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623034] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623104] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623112] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623115] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623153] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
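
Two entries in the run above record how the shutdown budget was chosen: nvme_ctrlr_shutdown_set_cc_done reads an RTD3 entry latency of 0 us from the identify data and falls back to a 10000 ms shutdown timeout. A sketch of that selection is below; the 10 s fallback is the value the log itself reports, while the helper name and the round-up in the nonzero branch are illustrative assumptions rather than SPDK's exact code.

#include <stdint.h>

#define DEFAULT_SHUTDOWN_TIMEOUT_MS 10000u

/* Pick the shutdown poll budget from the controller's RTD3E field. */
static uint32_t pick_shutdown_timeout_ms(uint32_t rtd3e_us)
{
	if (rtd3e_us == 0) {
		/* Controller reports no RTD3 entry latency: use the default,
		 * matching the "shutdown timeout = 10000 ms" entry above. */
		return DEFAULT_SHUTDOWN_TIMEOUT_MS;
	}
	/* Otherwise honor the reported latency, rounded up to whole ms. */
	return (rtd3e_us + 999u) / 1000u;
}
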
00:20:48.714 [2024-11-08 04:04:23.623238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623346] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623365] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623472] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623482] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623486] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623596] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623603] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623613] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.714 [2024-11-08 04:04:23.623758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.714 [2024-11-08 04:04:23.623830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.714 [2024-11-08 04:04:23.623836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.714 [2024-11-08 04:04:23.623839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.714 [2024-11-08 04:04:23.623851] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.714 [2024-11-08 04:04:23.623858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.714 [2024-11-08 04:04:23.623865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.623880] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.623960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.623965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.623969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.623972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.623980] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.623985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.623988] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.623994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624209] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624334] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624597] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624610] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624637] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624692] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624701] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624720] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 
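
From here to the end of the section the same trace block repeats with fresh timestamps: build the request, send the capsule, the FABRIC PROPERTY GET notice on cid 3, and the response PDU (type 5) coming back. Each repetition is one poll of the Controller Status register while the host waits out the 10000 ms shutdown window. A loop in the spirit of the sketch below produces exactly this pattern on a TCP controller, where every register read is carried by a Fabrics Property Get; the function and the 1 ms poll interval are illustrative assumptions, the register accessor is SPDK's public one.

#include <stdbool.h>
#include <unistd.h>
#include "spdk/nvme.h"

/* Poll CSTS.SHST after shutdown notification until the controller reports
 * shutdown complete, or until the timeout picked above expires. On a TCP
 * controller each spdk_nvme_ctrlr_get_regs_csts() call is carried by a
 * Fabrics Property Get - the command the repeated *NOTICE* lines show. */
static bool wait_for_shutdown(struct spdk_nvme_ctrlr *ctrlr, uint32_t timeout_ms)
{
	union spdk_nvme_csts_register csts;
	uint32_t waited_ms = 0;

	while (waited_ms < timeout_ms) {
		csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
		if (csts.bits.shst == SPDK_NVME_SHST_COMPLETE) {
			return true;   /* controller finished its shutdown */
		}
		usleep(1000);          /* 1 ms between polls; interval assumed */
		waited_ms++;
	}
	return false;                  /* shutdown did not complete in time */
}
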
00:20:48.715 [2024-11-08 04:04:23.624810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624822] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.624916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.624921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.624925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624928] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.624936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.624944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.624950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.624966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.625028] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.625033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.715 [2024-11-08 04:04:23.625037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.625048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.625062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.625078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.715 [2024-11-08 04:04:23.625139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.715 [2024-11-08 04:04:23.625145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:48.715 [2024-11-08 04:04:23.625148] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.715 [2024-11-08 04:04:23.625160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.715 [2024-11-08 04:04:23.625167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.715 [2024-11-08 04:04:23.625174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.715 [2024-11-08 04:04:23.625189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625279] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625405] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625570] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625680] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625822] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625826] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.625916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.625921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.625924] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625928] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.625936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625940] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.625943] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.625950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.625966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626035] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.626038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626041] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.626050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.626063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.626078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.626169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.626182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.626195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.626212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626275] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.626283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.626295] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626300] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 
00:20:48.716 [2024-11-08 04:04:23.626309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.626326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.626411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.626449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.626463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.626481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626556] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.716 [2024-11-08 04:04:23.626570] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.716 [2024-11-08 04:04:23.626583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.716 [2024-11-08 04:04:23.626591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.716 [2024-11-08 04:04:23.626597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.716 [2024-11-08 04:04:23.626613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.716 [2024-11-08 04:04:23.626697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.716 [2024-11-08 04:04:23.626703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.626706] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.626718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.626731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 
04:04:23.626747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.626831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.626837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.626840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.626852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.626865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.626881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.626946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.626951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.626954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.626966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.626973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.626979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.626995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:48.717 [2024-11-08 04:04:23.627196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627212] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627216] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627566] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627570] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627684] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627814] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30) 00:20:48.717 [2024-11-08 04:04:23.627839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.717 [2024-11-08 04:04:23.627856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0 00:20:48.717 [2024-11-08 04:04:23.627928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.717 [2024-11-08 04:04:23.627942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.717 [2024-11-08 04:04:23.627946] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.717 [2024-11-08 04:04:23.627950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30 00:20:48.717 [2024-11-08 04:04:23.627959] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.717 [2024-11-08 04:04:23.627964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.717 [2024-11-08 04:04:23.627967] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30)
00:20:48.717 [2024-11-08 04:04:23.627974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.717 [2024-11-08 04:04:23.627991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0
00:20:48.717 [2024-11-08 04:04:23.628066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.717 [2024-11-08 04:04:23.628076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.717 [2024-11-08 04:04:23.628080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.717 [2024-11-08 04:04:23.628084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30
00:20:48.719 [2024-11-08 04:04:23.634429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.719 [2024-11-08 04:04:23.634445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.719 [2024-11-08 04:04:23.634450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.719 [2024-11-08 04:04:23.634453] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30
00:20:48.719 [2024-11-08 04:04:23.634466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.719 [2024-11-08 04:04:23.634471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.719 [2024-11-08 04:04:23.634475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e1d30)
00:20:48.719 [2024-11-08 04:04:23.634482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.719 [2024-11-08 04:04:23.634505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x540350, cid 3, qid 0
00:20:48.719 [2024-11-08 04:04:23.634582] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.719 [2024-11-08 04:04:23.634588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.719 [2024-11-08 04:04:23.634592] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.719 [2024-11-08 04:04:23.634595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x540350) on tqpair=0x4e1d30
00:20:48.719 [2024-11-08 04:04:23.634602] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 11 milliseconds
00:20:48.719 Temperature: 0 Kelvin (-273 Celsius)
00:20:48.719 Available Spare: 0%
00:20:48.719 Available Spare Threshold: 0%
00:20:48.719 Life Percentage Used: 0%
00:20:48.719 Data Units Read: 0
00:20:48.719 Data Units Written: 0
00:20:48.719 Host Read Commands: 0
00:20:48.719 Host Write Commands: 0
00:20:48.719 Controller Busy Time: 0 minutes
00:20:48.719 Power Cycles: 0
00:20:48.719 Power On Hours: 0 hours
00:20:48.719 Unsafe Shutdowns: 0
00:20:48.719 Unrecoverable Media Errors: 0
00:20:48.719 Lifetime Error Log Entries: 0
00:20:48.719 Warning Temperature Time: 0 minutes
00:20:48.719 Critical Temperature Time: 0 minutes
00:20:48.719
00:20:48.719 Number of Queues
00:20:48.720 ================
00:20:48.720 Number of I/O Submission Queues: 127
00:20:48.720 Number of I/O Completion Queues: 127
00:20:48.720
00:20:48.720 Active Namespaces
00:20:48.720 =================
00:20:48.720 Namespace ID:1
00:20:48.720 Error Recovery Timeout: Unlimited
00:20:48.720 Command Set Identifier: NVM (00h)
00:20:48.720 Deallocate: Supported
00:20:48.720 Deallocated/Unwritten Error: Not Supported
00:20:48.720 Deallocated Read Value: Unknown
00:20:48.720 Deallocate in Write Zeroes: Not Supported
00:20:48.720 Deallocated Guard Field: 0xFFFF
00:20:48.720 Flush: Supported
00:20:48.720 Reservation: Supported
00:20:48.720 Namespace Sharing Capabilities: Multiple Controllers
00:20:48.720 Size (in LBAs): 131072 (0GiB)
00:20:48.720 Capacity (in LBAs): 131072 (0GiB)
00:20:48.720 Utilization (in LBAs): 131072 (0GiB)
00:20:48.720 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:48.720 EUI64: ABCDEF0123456789
00:20:48.720 UUID: bd7e30c0-c074-4703-8fe6-39d9f9bf5cf4
00:20:48.720 Thin Provisioning: Not Supported
00:20:48.720 Per-NS Atomic Units: Yes
00:20:48.720 Atomic Boundary Size (Normal): 0
00:20:48.720 Atomic Boundary Size (PFail): 0
00:20:48.720 Atomic Boundary Offset: 0
00:20:48.720 Maximum Single Source Range Length: 65535
00:20:48.720 Maximum Copy Length: 65535
00:20:48.720 Maximum Source Range Count: 1
00:20:48.720 NGUID/EUI64 Never Reused: No
00:20:48.720 Namespace Write Protected: No
00:20:48.720 Number of LBA Formats: 1
00:20:48.720 Current LBA Format: LBA Format #00
00:20:48.720 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:48.720
00:20:48.720 04:04:23 -- host/identify.sh@51 -- # sync
00:20:48.720 04:04:23 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:48.720 04:04:23 -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.720 04:04:23 -- common/autotest_common.sh@10 -- # set +x
00:20:48.720 04:04:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.720 04:04:23 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:48.720 04:04:23 -- host/identify.sh@56 -- # nvmftestfini
00:20:48.720 04:04:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:48.720 04:04:23 -- nvmf/common.sh@116 -- # sync
00:20:48.720 04:04:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:48.720 04:04:23 -- nvmf/common.sh@119 -- # set +e
00:20:48.720 04:04:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:48.720 04:04:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:48.720 rmmod nvme_tcp
00:20:48.720 rmmod nvme_fabrics
00:20:48.720 rmmod nvme_keyring
00:20:48.720 04:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:48.720 04:04:23 -- nvmf/common.sh@123 -- # set -e
00:20:48.720 04:04:23 -- nvmf/common.sh@124 -- # return 0
00:20:48.720 04:04:23 -- nvmf/common.sh@477 -- # '[' -n 82920 ']'
00:20:48.720 04:04:23 -- nvmf/common.sh@478 -- # killprocess 82920
00:20:48.720 04:04:23 -- common/autotest_common.sh@936 -- # '[' -z 82920 ']'
00:20:48.720 04:04:23 -- common/autotest_common.sh@940 -- # kill -0 82920
00:20:48.720 04:04:23 -- common/autotest_common.sh@941 -- # uname
00:20:48.720 04:04:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:48.720 04:04:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82920
00:20:48.978 04:04:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:48.978 04:04:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:48.978 killing process with pid 82920
00:20:48.978 04:04:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82920'
00:20:48.978 04:04:23 -- common/autotest_common.sh@955 -- # kill 82920
00:20:48.978 [2024-11-08 04:04:23.835180] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:20:48.978 04:04:23 -- common/autotest_common.sh@960 -- # wait 82920
00:20:49.236 04:04:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:49.236 04:04:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:49.236 04:04:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:49.236 04:04:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:49.236 04:04:24 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:49.236 04:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:49.236 04:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:49.236 04:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:49.236 04:04:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
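The teardown just traced maps onto a handful of commands. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket; the pid 82920 and the interface names are the ones from this run, and the netns deletion is an assumption about what the xtrace-disabled _remove_spdk_ns helper does:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Drop the subsystem first so initiators cannot reconnect mid-teardown.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Stop the target; wait works here only because the test shell spawned it.
kill 82920
wait 82920

# Unload the kernel initiator stack (produces the rmmod lines seen above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Tear down the test network state (assumed body of _remove_spdk_ns).
ip netns delete nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if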
00:20:49.236
00:20:49.236 real    0m2.886s
00:20:49.236 user    0m7.845s
00:20:49.236 sys     0m0.733s
00:20:49.236 04:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:49.236 04:04:24 -- common/autotest_common.sh@10 -- # set +x
00:20:49.236 ************************************
00:20:49.236 END TEST nvmf_identify
00:20:49.236 ************************************
00:20:49.236 04:04:24 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:49.236 04:04:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:49.236 04:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:49.236 04:04:24 -- common/autotest_common.sh@10 -- # set +x
00:20:49.236 ************************************
00:20:49.236 START TEST nvmf_perf
00:20:49.236 ************************************
00:20:49.236 04:04:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:49.236 * Looking for test storage...
00:20:49.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:49.236 04:04:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:49.236 04:04:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:49.236 04:04:24 -- common/autotest_common.sh@1690 -- # lcov --version
00:20:49.496 04:04:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:49.496 04:04:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:49.496 04:04:24 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:49.496 04:04:24 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:49.496 04:04:24 -- scripts/common.sh@335 -- # IFS=.-:
00:20:49.496 04:04:24 -- scripts/common.sh@335 -- # read -ra ver1
00:20:49.496 04:04:24 -- scripts/common.sh@336 -- # IFS=.-:
00:20:49.496 04:04:24 -- scripts/common.sh@336 -- # read -ra ver2
00:20:49.496 04:04:24 -- scripts/common.sh@337 -- # local 'op=<'
00:20:49.496 04:04:24 -- scripts/common.sh@339 -- # ver1_l=2
00:20:49.496 04:04:24 -- scripts/common.sh@340 -- # ver2_l=1
00:20:49.496 04:04:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:49.496 04:04:24 -- scripts/common.sh@343 -- # case "$op" in
00:20:49.496 04:04:24 -- scripts/common.sh@344 -- # : 1
00:20:49.496 04:04:24 -- scripts/common.sh@363 -- # (( v = 0 ))
00:20:49.496 04:04:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:49.496 04:04:24 -- scripts/common.sh@364 -- # decimal 1
00:20:49.496 04:04:24 -- scripts/common.sh@352 -- # local d=1
00:20:49.496 04:04:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:49.496 04:04:24 -- scripts/common.sh@354 -- # echo 1
00:20:49.496 04:04:24 -- scripts/common.sh@364 -- # ver1[v]=1
00:20:49.496 04:04:24 -- scripts/common.sh@365 -- # decimal 2
00:20:49.496 04:04:24 -- scripts/common.sh@352 -- # local d=2
00:20:49.496 04:04:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:49.496 04:04:24 -- scripts/common.sh@354 -- # echo 2
00:20:49.496 04:04:24 -- scripts/common.sh@365 -- # ver2[v]=2
00:20:49.496 04:04:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:49.496 04:04:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:49.496 04:04:24 -- scripts/common.sh@367 -- # return 0
00:20:49.496 04:04:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:49.496 04:04:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.496 --rc genhtml_branch_coverage=1
00:20:49.496 --rc genhtml_function_coverage=1
00:20:49.496 --rc genhtml_legend=1
00:20:49.496 --rc geninfo_all_blocks=1
00:20:49.496 --rc geninfo_unexecuted_blocks=1
00:20:49.496
00:20:49.496 '
00:20:49.496 04:04:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.496 --rc genhtml_branch_coverage=1
00:20:49.496 --rc genhtml_function_coverage=1
00:20:49.496 --rc genhtml_legend=1
00:20:49.496 --rc geninfo_all_blocks=1
00:20:49.496 --rc geninfo_unexecuted_blocks=1
00:20:49.496
00:20:49.496 '
00:20:49.496 04:04:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.496 --rc genhtml_branch_coverage=1
00:20:49.496 --rc genhtml_function_coverage=1
00:20:49.496 --rc genhtml_legend=1
00:20:49.496 --rc geninfo_all_blocks=1
00:20:49.496 --rc geninfo_unexecuted_blocks=1
00:20:49.496
00:20:49.496 '
00:20:49.496 04:04:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.496 --rc genhtml_branch_coverage=1
00:20:49.496 --rc genhtml_function_coverage=1
00:20:49.496 --rc genhtml_legend=1
00:20:49.496 --rc geninfo_all_blocks=1
00:20:49.496 --rc geninfo_unexecuted_blocks=1
00:20:49.496
00:20:49.496 '
00:20:49.496 04:04:24 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:49.496 04:04:24 -- nvmf/common.sh@7 -- # uname -s
00:20:49.496 04:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:49.496 04:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:49.496 04:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:49.496 04:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:49.496 04:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:49.496 04:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:49.496 04:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:49.496 04:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:49.496 04:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:49.496 04:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01
00:20:49.496 04:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01
00:20:49.496 04:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:49.496 04:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:49.496 04:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:49.496 04:04:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:49.496 04:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:49.496 04:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:49.496 04:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:49.496 04:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.496 04:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.496 04:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.496 04:04:24 -- paths/export.sh@5 -- # export PATH
00:20:49.496 04:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.496 04:04:24 -- nvmf/common.sh@46 -- # : 0
00:20:49.496 04:04:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:49.496 04:04:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:49.496 04:04:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:49.496 04:04:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:49.496 04:04:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:49.496 04:04:24 -- nvmf/common.sh@32 -- # '[' -n '' ']'
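The NVME_HOSTNQN and NVME_HOSTID values exported above come straight from nvme-cli. A short sketch of how the pair is derived and consumed, assuming nvme-cli is installed; the connect line is illustrative only and reuses the 10.0.0.2:4420 listener configured later in this run:

# gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)
# The trailing UUID doubles as the host ID.
NVME_HOSTID=${NVME_HOSTNQN##*:}

# Both are then passed on every initiator-side connect:
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"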
00:20:49.496 04:04:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:49.496 04:04:24 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:49.496 04:04:24 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:20:49.496 04:04:24 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:20:49.496 04:04:24 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:49.496 04:04:24 -- host/perf.sh@17 -- # nvmftestinit
00:20:49.496 04:04:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:49.496 04:04:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:49.496 04:04:24 -- nvmf/common.sh@436 -- # prepare_net_devs
00:20:49.496 04:04:24 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:49.496 04:04:24 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:49.496 04:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:49.496 04:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:49.496 04:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:49.496 04:04:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:49.496 04:04:24 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:49.496 04:04:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:49.496 04:04:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:49.496 04:04:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:49.496 04:04:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:49.496 04:04:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:49.496 04:04:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:49.496 04:04:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:49.496 04:04:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:49.496 04:04:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:49.496 04:04:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:49.496 04:04:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:49.496 04:04:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:49.496 04:04:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:49.496 04:04:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:49.496 Cannot find device "nvmf_tgt_br"
00:20:49.496 04:04:24 -- nvmf/common.sh@154 -- # true
00:20:49.496 04:04:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:49.496 Cannot find device "nvmf_tgt_br2"
00:20:49.496 04:04:24 -- nvmf/common.sh@155 -- # true
00:20:49.496 04:04:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:49.496 04:04:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:49.497 Cannot find device "nvmf_tgt_br"
00:20:49.497 04:04:24 -- nvmf/common.sh@157 -- # true
00:20:49.497 04:04:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:49.497 Cannot find device "nvmf_tgt_br2"
00:20:49.497 04:04:24 -- nvmf/common.sh@158 -- # true
00:20:49.497 04:04:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:49.497 04:04:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:49.497 04:04:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:49.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:49.497 04:04:24 -- nvmf/common.sh@161 -- # true
00:20:49.497 04:04:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:49.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:49.497 04:04:24 -- nvmf/common.sh@162 -- # true
00:20:49.497 04:04:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:49.497 04:04:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:49.755 04:04:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:49.755 04:04:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:49.755 04:04:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:49.755 04:04:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:49.755 04:04:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:49.755 04:04:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:49.755 04:04:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:49.755 04:04:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:49.755 04:04:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:49.755 04:04:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:49.755 04:04:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:49.755 04:04:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:49.755 04:04:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:49.755 04:04:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:49.755 04:04:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:49.755 04:04:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:49.755 04:04:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:49.755 04:04:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:49.755 04:04:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:49.755 04:04:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:49.755 04:04:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:49.755 04:04:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:49.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:49.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms
00:20:49.755
00:20:49.755 --- 10.0.0.2 ping statistics ---
00:20:49.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:49.755 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
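nvmf_veth_init, traced above, builds the whole test network from veth pairs, one network namespace, and a bridge, then verifies it with ping. A condensed sketch of the same commands (root and iproute2 assumed; names and addresses exactly as in this log, with the second target interface omitted for brevity):

# The target side lives in its own namespace; the initiator stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 = target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bridge the root-namespace ends together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same reachability check the log performs next.
ping -c 1 10.0.0.2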
00:20:49.755 04:04:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:49.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:49.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms
00:20:49.755
00:20:49.755 --- 10.0.0.3 ping statistics ---
00:20:49.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:49.755 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:20:49.755 04:04:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:49.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:49.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:20:49.755
00:20:49.756 --- 10.0.0.1 ping statistics ---
00:20:49.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:49.756 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:20:49.756 04:04:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:49.756 04:04:24 -- nvmf/common.sh@421 -- # return 0
00:20:49.756 04:04:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:49.756 04:04:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:49.756 04:04:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:49.756 04:04:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:49.756 04:04:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:49.756 04:04:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:49.756 04:04:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:49.756 04:04:24 -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:20:49.756 04:04:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:49.756 04:04:24 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:49.756 04:04:24 -- common/autotest_common.sh@10 -- # set +x
00:20:49.756 04:04:24 -- nvmf/common.sh@469 -- # nvmfpid=83156
00:20:49.756 04:04:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:49.756 04:04:24 -- nvmf/common.sh@470 -- # waitforlisten 83156
00:20:49.756 04:04:24 -- common/autotest_common.sh@829 -- # '[' -z 83156 ']'
00:20:49.756 04:04:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:49.756 04:04:24 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:49.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:49.756 04:04:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:49.756 04:04:24 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:49.756 04:04:24 -- common/autotest_common.sh@10 -- # set +x
00:20:50.014 [2024-11-08 04:04:24.896390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:50.014 [2024-11-08 04:04:24.896696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:50.014 [2024-11-08 04:04:25.034904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:50.272 [2024-11-08 04:04:25.142582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:50.272 [2024-11-08 04:04:25.143148] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:50.272 [2024-11-08 04:04:25.143253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:50.272 [2024-11-08 04:04:25.143330] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:50.272 [2024-11-08 04:04:25.143560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:50.272 [2024-11-08 04:04:25.143715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:50.272 [2024-11-08 04:04:25.143902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:50.272 [2024-11-08 04:04:25.143915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:50.838 04:04:25 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:50.838 04:04:25 -- common/autotest_common.sh@862 -- # return 0
00:20:50.838 04:04:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:20:50.838 04:04:25 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:50.838 04:04:25 -- common/autotest_common.sh@10 -- # set +x
00:20:50.838 04:04:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:50.838 04:04:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:50.838 04:04:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:20:51.404 04:04:26 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev
00:20:51.404 04:04:26 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:20:51.662 04:04:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0
00:20:51.662 04:04:26 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:20:51.920 04:04:27 -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:20:51.920 04:04:27 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']'
00:20:51.920 04:04:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:20:51.920 04:04:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:20:51.920 04:04:27 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:52.177 [2024-11-08 04:04:27.227638] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:52.177 04:04:27 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:52.435 04:04:27 -- host/perf.sh@45 -- # for bdev in $bdevs
00:20:52.435 04:04:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:52.693 04:04:27 -- host/perf.sh@45 -- # for bdev in $bdevs
00:20:52.693 04:04:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:20:52.951 04:04:27 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:53.209 [2024-11-08 04:04:28.165251] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:53.209 04:04:28 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:53.467 04:04:28 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']'
00:20:53.467 04:04:28 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
00:20:53.467 04:04:28 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
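The trace above is the standard SPDK recipe for standing up an NVMe-oF TCP target over JSON-RPC. The same four steps as a plain script, using only the calls that appear in this run (rpc.py path, NQN, serial, and addresses as logged; -o is the TCP transport option taken from NVMF_TRANSPORT_OPTS):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Create the TCP transport.
$rpc_py nvmf_create_transport -t tcp -o

# 2. Create a subsystem that accepts any host (-a) with a fixed serial (-s).
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# 3. Expose each bdev as a namespace (NSID 1 = Malloc0, NSID 2 = Nvme0n1).
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1

# 4. Listen on the veth target address, plus the discovery subsystem.
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420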
00:20:53.467 04:04:28 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
00:20:54.402 Initializing NVMe Controllers
00:20:54.402 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:20:54.402 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:20:54.402 Initialization complete. Launching workers.
00:20:54.402 ========================================================
00:20:54.402                                                            Latency(us)
00:20:54.402 Device Information              :       IOPS      MiB/s    Average        min        max
00:20:54.402 PCIE (0000:00:06.0) NSID 1 from core 0:  23559.75      92.03    1358.84     330.27    7944.34
00:20:54.402 ========================================================
00:20:54.402 Total                           :        23559.75      92.03    1358.84     330.27    7944.34
00:20:54.402
00:20:54.402 04:04:29 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:55.775 Initializing NVMe Controllers
00:20:55.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:55.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:55.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:55.775 Initialization complete. Launching workers.
00:20:55.775 ========================================================
00:20:55.775                                                            Latency(us)
00:20:55.775 Device Information              :       IOPS      MiB/s    Average        min        max
00:20:55.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    3486.98      13.62     286.53      99.68    7195.53
00:20:55.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:     122.00       0.48    8237.12    4964.92   15066.89
00:20:55.775 ========================================================
00:20:55.775 Total                           :         3608.98      14.10     555.29      99.68   15066.89
00:20:55.775
00:20:55.775 04:04:30 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:57.149 [2024-11-08 04:04:32.142458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7790f0 is same with the state(5) to be set
00:20:57.149 [2024-11-08 04:04:32.143543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7790f0 is same with the state(5) to be set
00:20:57.149 Initializing NVMe Controllers
00:20:57.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:57.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:57.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:57.149 Initialization complete. Launching workers.
00:20:57.149 ========================================================
00:20:57.149                                                            Latency(us)
00:20:57.149 Device Information              :       IOPS      MiB/s    Average        min        max
00:20:57.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   10211.98      39.89    3133.56     589.16    7543.07
00:20:57.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    2676.09      10.45   12079.92    4611.02   20259.68
00:20:57.149 ========================================================
00:20:57.149 Total                           :        12888.07      50.34    4991.19     589.16   20259.68
00:20:57.408
00:20:57.408 04:04:32 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:20:57.408 04:04:32 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:59.939 Initializing NVMe Controllers
00:20:59.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:59.939 Controller IO queue size 128, less than required.
00:20:59.939 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:59.939 Controller IO queue size 128, less than required.
00:20:59.939 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:59.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:59.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:59.939 Initialization complete. Launching workers.
00:20:59.939 ========================================================
00:20:59.939                                                            Latency(us)
00:20:59.939 Device Information              :       IOPS      MiB/s    Average        min        max
00:20:59.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1809.04     452.26   71607.45   53872.95  128787.80
00:20:59.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:     586.39     146.60  229490.44  143286.17  344795.66
00:20:59.939 ========================================================
00:20:59.939 Total                           :         2395.44     598.86  110256.58   53872.95  344795.66
00:20:59.939
00:20:59.939 04:04:34 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:00.198 No valid NVMe controllers or AIO or URING devices found
00:21:00.198 Initializing NVMe Controllers
00:21:00.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:00.198 Controller IO queue size 128, less than required.
00:21:00.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.198 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:00.198 Controller IO queue size 128, less than required.
00:21:00.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.198 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:21:00.198 WARNING: Some requested NVMe devices were skipped
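The two warnings are plain modular arithmetic: spdk_nvme_perf requires the I/O size to be a whole number of sectors on every namespace under test, and 36964 misses a multiple of both sector sizes by 100 bytes. A quick check:

# nsid 1 uses 512-byte sectors, nsid 2 uses 4096-byte sectors.
echo $(( 36964 % 512 ))    # 100, since 36964 = 72 * 512 + 100
echo $(( 36964 % 4096 ))   # 100, since 36964 = 9 * 4096 + 100
# Both remainders are non-zero, so both namespaces are dropped and the run
# reports "No valid NVMe controllers or AIO or URING devices found".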
00:21:00.198 04:04:35 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:02.768 Initializing NVMe Controllers
00:21:02.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:02.768 Controller IO queue size 128, less than required.
00:21:02.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:02.768 Controller IO queue size 128, less than required.
00:21:02.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:02.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:02.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:02.768 Initialization complete. Launching workers.
00:21:02.768
00:21:02.768 ====================
00:21:02.768 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:02.768 TCP transport:
00:21:02.768 	polls: 9124
00:21:02.768 	idle_polls: 6268
00:21:02.768 	sock_completions: 2856
00:21:02.768 	nvme_completions: 4708
00:21:02.768 	submitted_requests: 7136
00:21:02.768 	queued_requests: 1
00:21:02.768
00:21:02.768 ====================
00:21:02.768 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:02.768 TCP transport:
00:21:02.768 	polls: 12352
00:21:02.768 	idle_polls: 9473
00:21:02.768 	sock_completions: 2879
00:21:02.768 	nvme_completions: 5701
00:21:02.768 	submitted_requests: 8675
00:21:02.768 	queued_requests: 1
00:21:02.768 ========================================================
00:21:02.768                                                            Latency(us)
00:21:02.768 Device Information              :       IOPS      MiB/s    Average        min        max
00:21:02.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1240.36     310.09  105690.52   65691.11  198973.88
00:21:02.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    1488.84     372.21   86789.99   44799.85  128436.37
00:21:02.768 ========================================================
00:21:02.768 Total                           :         2729.20     682.30   95379.88   44799.85  198973.88
00:21:02.768
00:21:02.768 04:04:37 -- host/perf.sh@66 -- # sync
00:21:03.026 04:04:37 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:03.026 04:04:37 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:21:03.026 04:04:37 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']'
00:21:03.026 04:04:37 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:21:03.283 04:04:38 -- host/perf.sh@72 -- # ls_guid=8d27b106-a6f8-4130-8a6a-5050697f5580
00:21:03.283 04:04:38 -- host/perf.sh@73 -- # get_lvs_free_mb 8d27b106-a6f8-4130-8a6a-5050697f5580
00:21:03.283 04:04:38 -- common/autotest_common.sh@1353 -- # local lvs_uuid=8d27b106-a6f8-4130-8a6a-5050697f5580
00:21:03.283 04:04:38 -- common/autotest_common.sh@1354 -- # local lvs_info
00:21:03.283 04:04:38 -- common/autotest_common.sh@1355 -- # local fc
00:21:03.283 04:04:38 -- common/autotest_common.sh@1356 -- # local cs
00:21:03.283 04:04:38 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:03.541 04:04:38 -- common/autotest_common.sh@1357 -- # lvs_info='[
00:21:03.541 {
00:21:03.541 "base_bdev": "Nvme0n1",
00:21:03.541 "block_size": 4096,
00:21:03.541 "cluster_size": 4194304,
00:21:03.541 "free_clusters": 1278,
00:21:03.541 "name": "lvs_0",
00:21:03.541 "total_data_clusters": 1278,
00:21:03.541 "uuid": "8d27b106-a6f8-4130-8a6a-5050697f5580"
00:21:03.541 }
00:21:03.541 ]'
00:21:03.541 04:04:38 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8d27b106-a6f8-4130-8a6a-5050697f5580") .free_clusters'
00:21:03.541 04:04:38 -- common/autotest_common.sh@1358 -- # fc=1278
00:21:03.541 04:04:38 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8d27b106-a6f8-4130-8a6a-5050697f5580") .cluster_size'
00:21:03.541 5112
00:21:03.541 04:04:38 -- common/autotest_common.sh@1359 -- # cs=4194304
00:21:03.541 04:04:38 -- common/autotest_common.sh@1362 -- # free_mb=5112
00:21:03.541 04:04:38 -- common/autotest_common.sh@1363 -- # echo 5112
00:21:03.541 04:04:38 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']'
00:21:03.541 04:04:38 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8d27b106-a6f8-4130-8a6a-5050697f5580 lbd_0 5112
00:21:03.799 04:04:38 -- host/perf.sh@80 -- # lb_guid=b0605f96-da53-46d9-bc31-59a2b736c3c1
00:21:03.799 04:04:38 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b0605f96-da53-46d9-bc31-59a2b736c3c1 lvs_n_0
00:21:04.364 04:04:39 -- host/perf.sh@83 -- # ls_nested_guid=3dbdc1b4-c382-4352-8713-a2758354e470
00:21:04.365 04:04:39 -- host/perf.sh@84 -- # get_lvs_free_mb 3dbdc1b4-c382-4352-8713-a2758354e470
00:21:04.365 04:04:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3dbdc1b4-c382-4352-8713-a2758354e470
00:21:04.365 04:04:39 -- common/autotest_common.sh@1354 -- # local lvs_info
00:21:04.365 04:04:39 -- common/autotest_common.sh@1355 -- # local fc
00:21:04.365 04:04:39 -- common/autotest_common.sh@1356 -- # local cs
00:21:04.365 04:04:39 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:04.623 04:04:39 -- common/autotest_common.sh@1357 -- # lvs_info='[
00:21:04.623 {
00:21:04.623 "base_bdev": "Nvme0n1",
00:21:04.623 "block_size": 4096,
00:21:04.623 "cluster_size": 4194304,
00:21:04.623 "free_clusters": 0,
00:21:04.623 "name": "lvs_0",
00:21:04.623 "total_data_clusters": 1278,
00:21:04.623 "uuid": "8d27b106-a6f8-4130-8a6a-5050697f5580"
00:21:04.623 },
00:21:04.623 {
00:21:04.623 "base_bdev": "b0605f96-da53-46d9-bc31-59a2b736c3c1",
00:21:04.623 "block_size": 4096,
00:21:04.623 "cluster_size": 4194304,
00:21:04.623 "free_clusters": 1276,
00:21:04.623 "name": "lvs_n_0",
00:21:04.623 "total_data_clusters": 1276,
00:21:04.623 "uuid": "3dbdc1b4-c382-4352-8713-a2758354e470"
00:21:04.623 }
00:21:04.623 ]'
00:21:04.623 04:04:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3dbdc1b4-c382-4352-8713-a2758354e470") .free_clusters'
00:21:04.623 04:04:39 -- common/autotest_common.sh@1358 -- # fc=1276
00:21:04.623 04:04:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3dbdc1b4-c382-4352-8713-a2758354e470") .cluster_size'
00:21:04.623 5104
common/autotest_common.sh@1359 -- # cs=4194304 00:21:04.623 04:04:39 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:21:04.623 04:04:39 -- common/autotest_common.sh@1363 -- # echo 5104 00:21:04.623 04:04:39 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:04.623 04:04:39 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3dbdc1b4-c382-4352-8713-a2758354e470 lbd_nest_0 5104 00:21:04.881 04:04:39 -- host/perf.sh@88 -- # lb_nested_guid=6825251b-14eb-4a33-af88-3c97dd48a08e 00:21:04.881 04:04:39 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.139 04:04:40 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:05.139 04:04:40 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6825251b-14eb-4a33-af88-3c97dd48a08e 00:21:05.397 04:04:40 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.655 04:04:40 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:05.655 04:04:40 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:05.655 04:04:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:05.655 04:04:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.655 04:04:40 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:05.913 No valid NVMe controllers or AIO or URING devices found 00:21:05.913 Initializing NVMe Controllers 00:21:05.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.913 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:05.913 WARNING: Some requested NVMe devices were skipped 00:21:05.913 04:04:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.913 04:04:40 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.118 Initializing NVMe Controllers 00:21:18.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.118 Initialization complete. Launching workers. 
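Annotation: the get_lvs_free_mb traces above reduce to free_mb = free_clusters * cluster_size / 2^20, i.e. 1278 clusters * 4 MiB = 5112 MiB on lvs_0 and 1276 * 4 MiB = 5104 MiB on the nested store, which is why lbd_0 is created at 5112 and lbd_nest_0 at 5104. A minimal sketch of the same calculation, with the UUID and rpc.py path taken from the trace:

    # Sketch of the free-space math behind get_lvs_free_mb (values mirror the trace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=8d27b106-a6f8-4130-8a6a-5050697f5580
    lvs_info=$("$rpc" bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_info")  # 1278
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<<"$lvs_info")  # 4194304
    echo $(( fc * cs / 1024 / 1024 ))                                        # 5112 MiB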
00:21:18.118 ======================================================== 00:21:18.118 Latency(us) 00:21:18.118 Device Information : IOPS MiB/s Average min max 00:21:18.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 865.20 108.15 1154.89 385.75 8651.12 00:21:18.118 ======================================================== 00:21:18.118 Total : 865.20 108.15 1154.89 385.75 8651.12 00:21:18.118 00:21:18.118 04:04:51 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:18.118 04:04:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.118 04:04:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.118 No valid NVMe controllers or AIO or URING devices found 00:21:18.118 Initializing NVMe Controllers 00:21:18.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.118 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:18.118 WARNING: Some requested NVMe devices were skipped 00:21:18.118 04:04:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.118 04:04:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.095 [2024-11-08 04:05:01.684476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.095 [2024-11-08 04:05:01.685247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.095 [2024-11-08 04:05:01.685362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.095 [2024-11-08 04:05:01.685496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.095 [2024-11-08 04:05:01.685567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.095 [2024-11-08 04:05:01.685660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f900 is same with the state(5) to be set 00:21:28.096 Initializing NVMe Controllers 00:21:28.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.096 Initialization complete. Launching workers. 
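Annotation: host/perf.sh sweeps spdk_nvme_perf across the qd_depth=(1 32 128) and io_size=(512 131072) arrays declared above. Each 512-byte pass prints "No valid NVMe controllers ... found" because the namespace's 4096-byte blocks cannot back 512-byte I/O, so only the 128 KiB passes produce latency tables. A sketch of the sweep, with the binary path and transport string copied from the trace:

    # Sketch of the qd x io_size sweep driven by host/perf.sh.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    for qd in 1 32 128; do
      for o in 512 131072; do
        # 50/50 random read/write for 10 s against the TCP listener on cnode1
        "$perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done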
00:21:28.096 ======================================================== 00:21:28.096 Latency(us) 00:21:28.096 Device Information : IOPS MiB/s Average min max 00:21:28.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1162.44 145.30 27531.96 6319.95 237114.88 00:21:28.096 ======================================================== 00:21:28.096 Total : 1162.44 145.30 27531.96 6319.95 237114.88 00:21:28.096 00:21:28.096 04:05:01 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:28.096 04:05:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:28.096 04:05:01 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.096 No valid NVMe controllers or AIO or URING devices found 00:21:28.096 Initializing NVMe Controllers 00:21:28.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.096 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:28.096 WARNING: Some requested NVMe devices were skipped 00:21:28.096 04:05:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:28.096 04:05:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:38.088 Initializing NVMe Controllers 00:21:38.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:38.088 Controller IO queue size 128, less than required. 00:21:38.088 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:38.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:38.088 Initialization complete. Launching workers. 
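Annotation: the qd=128 pass warns "Controller IO queue size 128, less than required" because the requested depth is not smaller than the controller's I/O queue size, so part of the submissions queue in the host driver rather than on the wire. If deeper queues were wanted, the target transport could be created with a larger per-queue depth; a hedged sketch only (the -q/--max-queue-depth option exists on nvmf_create_transport, but the value 256 is illustrative and not what this run used):

    # Hedged sketch: a deeper TCP transport queue so qd=128 fits on the wire.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -q 256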
00:21:38.088 ======================================================== 00:21:38.088 Latency(us) 00:21:38.088 Device Information : IOPS MiB/s Average min max 00:21:38.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3739.74 467.47 34224.02 12062.62 74686.96 00:21:38.088 ======================================================== 00:21:38.088 Total : 3739.74 467.47 34224.02 12062.62 74686.96 00:21:38.088 00:21:38.088 04:05:12 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.088 04:05:12 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6825251b-14eb-4a33-af88-3c97dd48a08e 00:21:38.088 04:05:13 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:38.346 04:05:13 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b0605f96-da53-46d9-bc31-59a2b736c3c1 00:21:38.604 04:05:13 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:38.604 04:05:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:38.604 04:05:13 -- host/perf.sh@114 -- # nvmftestfini 00:21:38.604 04:05:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:38.604 04:05:13 -- nvmf/common.sh@116 -- # sync 00:21:38.861 04:05:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:38.861 04:05:13 -- nvmf/common.sh@119 -- # set +e 00:21:38.861 04:05:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:38.861 04:05:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:38.861 rmmod nvme_tcp 00:21:38.861 rmmod nvme_fabrics 00:21:38.861 rmmod nvme_keyring 00:21:38.861 04:05:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:38.861 04:05:13 -- nvmf/common.sh@123 -- # set -e 00:21:38.861 04:05:13 -- nvmf/common.sh@124 -- # return 0 00:21:38.861 04:05:13 -- nvmf/common.sh@477 -- # '[' -n 83156 ']' 00:21:38.861 04:05:13 -- nvmf/common.sh@478 -- # killprocess 83156 00:21:38.861 04:05:13 -- common/autotest_common.sh@936 -- # '[' -z 83156 ']' 00:21:38.861 04:05:13 -- common/autotest_common.sh@940 -- # kill -0 83156 00:21:38.861 04:05:13 -- common/autotest_common.sh@941 -- # uname 00:21:38.861 04:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.861 04:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83156 00:21:38.861 04:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:38.861 04:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:38.861 04:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83156' 00:21:38.861 killing process with pid 83156 00:21:38.861 04:05:13 -- common/autotest_common.sh@955 -- # kill 83156 00:21:38.861 04:05:13 -- common/autotest_common.sh@960 -- # wait 83156 00:21:39.818 04:05:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:39.818 04:05:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:39.818 04:05:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:39.818 04:05:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.818 04:05:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:39.818 04:05:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.818 04:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.818 04:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.818 04:05:14 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:39.818 ************************************ 00:21:39.818 END TEST nvmf_perf 00:21:39.818 ************************************ 00:21:39.818 00:21:39.818 real 0m50.468s 00:21:39.818 user 3m10.005s 00:21:39.818 sys 0m10.668s 00:21:39.818 04:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:39.818 04:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:39.818 04:05:14 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:39.818 04:05:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.818 04:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.818 04:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:39.818 ************************************ 00:21:39.818 START TEST nvmf_fio_host 00:21:39.818 ************************************ 00:21:39.818 04:05:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:39.818 * Looking for test storage... 00:21:39.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:39.818 04:05:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:39.818 04:05:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:39.818 04:05:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:40.089 04:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:40.089 04:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:40.089 04:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:40.089 04:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:40.089 04:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:21:40.089 04:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:21:40.089 04:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.089 04:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:21:40.089 04:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:21:40.089 04:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:21:40.089 04:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:21:40.089 04:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:40.089 04:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:21:40.089 04:05:14 -- scripts/common.sh@344 -- # : 1 00:21:40.089 04:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:40.089 04:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.089 04:05:14 -- scripts/common.sh@364 -- # decimal 1 00:21:40.089 04:05:14 -- scripts/common.sh@352 -- # local d=1 00:21:40.089 04:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.089 04:05:14 -- scripts/common.sh@354 -- # echo 1 00:21:40.089 04:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:40.089 04:05:14 -- scripts/common.sh@365 -- # decimal 2 00:21:40.089 04:05:14 -- scripts/common.sh@352 -- # local d=2 00:21:40.089 04:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.089 04:05:14 -- scripts/common.sh@354 -- # echo 2 00:21:40.089 04:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:40.089 04:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:40.089 04:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:40.089 04:05:14 -- scripts/common.sh@367 -- # return 0 00:21:40.089 04:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.089 04:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.089 --rc genhtml_branch_coverage=1 00:21:40.089 --rc genhtml_function_coverage=1 00:21:40.089 --rc genhtml_legend=1 00:21:40.089 --rc geninfo_all_blocks=1 00:21:40.089 --rc geninfo_unexecuted_blocks=1 00:21:40.089 00:21:40.089 ' 00:21:40.089 04:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.089 --rc genhtml_branch_coverage=1 00:21:40.089 --rc genhtml_function_coverage=1 00:21:40.089 --rc genhtml_legend=1 00:21:40.089 --rc geninfo_all_blocks=1 00:21:40.089 --rc geninfo_unexecuted_blocks=1 00:21:40.089 00:21:40.089 ' 00:21:40.089 04:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.089 --rc genhtml_branch_coverage=1 00:21:40.089 --rc genhtml_function_coverage=1 00:21:40.089 --rc genhtml_legend=1 00:21:40.089 --rc geninfo_all_blocks=1 00:21:40.089 --rc geninfo_unexecuted_blocks=1 00:21:40.089 00:21:40.089 ' 00:21:40.089 04:05:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.089 --rc genhtml_branch_coverage=1 00:21:40.089 --rc genhtml_function_coverage=1 00:21:40.089 --rc genhtml_legend=1 00:21:40.089 --rc geninfo_all_blocks=1 00:21:40.089 --rc geninfo_unexecuted_blocks=1 00:21:40.089 00:21:40.089 ' 00:21:40.089 04:05:14 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.089 04:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.089 04:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.089 04:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.089 04:05:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@5 -- # export PATH 00:21:40.090 04:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.090 04:05:14 -- nvmf/common.sh@7 -- # uname -s 00:21:40.090 04:05:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.090 04:05:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.090 04:05:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.090 04:05:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.090 04:05:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.090 04:05:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.090 04:05:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.090 04:05:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.090 04:05:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.090 04:05:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:21:40.090 04:05:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:21:40.090 04:05:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.090 04:05:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.090 04:05:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.090 04:05:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.090 04:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.090 04:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.090 04:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.090 04:05:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- paths/export.sh@5 -- # export PATH 00:21:40.090 04:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.090 04:05:14 -- nvmf/common.sh@46 -- # : 0 00:21:40.090 04:05:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.090 04:05:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.090 04:05:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.090 04:05:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.090 04:05:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.090 04:05:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:40.090 04:05:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.090 04:05:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.090 04:05:14 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.090 04:05:14 -- host/fio.sh@14 -- # nvmftestinit 00:21:40.090 04:05:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.090 04:05:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.090 04:05:14 -- nvmf/common.sh@436 -- # prepare_net_devs 
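Annotation: since NET_TYPE=virt, prepare_net_devs falls through to nvmf_veth_init, whose command-by-command trace follows: a namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs, all legs are bridged over nvmf_br, and 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target) are assigned. A condensed sketch of the topology built below (order simplified; the full sequence is in the trace):

    # Condensed sketch of nvmf_veth_init (full sequence follows in the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT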
00:21:40.090 04:05:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.090 04:05:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.090 04:05:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.090 04:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.090 04:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.090 04:05:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:40.090 04:05:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:40.090 04:05:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.090 04:05:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.090 04:05:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:40.090 04:05:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:40.090 04:05:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.090 04:05:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.090 04:05:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.090 04:05:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.090 04:05:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.090 04:05:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.090 04:05:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.090 04:05:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.090 04:05:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:40.090 04:05:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:40.090 Cannot find device "nvmf_tgt_br" 00:21:40.090 04:05:15 -- nvmf/common.sh@154 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.090 Cannot find device "nvmf_tgt_br2" 00:21:40.090 04:05:15 -- nvmf/common.sh@155 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:40.090 04:05:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:40.090 Cannot find device "nvmf_tgt_br" 00:21:40.090 04:05:15 -- nvmf/common.sh@157 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:40.090 Cannot find device "nvmf_tgt_br2" 00:21:40.090 04:05:15 -- nvmf/common.sh@158 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:40.090 04:05:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:40.090 04:05:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.090 04:05:15 -- nvmf/common.sh@161 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.090 04:05:15 -- nvmf/common.sh@162 -- # true 00:21:40.090 04:05:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.090 04:05:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.090 04:05:15 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.090 04:05:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.090 04:05:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:40.090 04:05:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.349 04:05:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.349 04:05:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:40.349 04:05:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:40.349 04:05:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:40.349 04:05:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:40.349 04:05:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:40.349 04:05:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:40.349 04:05:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.349 04:05:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.349 04:05:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.349 04:05:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:40.349 04:05:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:40.349 04:05:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.349 04:05:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.349 04:05:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.349 04:05:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.349 04:05:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.349 04:05:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:40.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:21:40.349 00:21:40.349 --- 10.0.0.2 ping statistics --- 00:21:40.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.349 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:40.349 04:05:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:40.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:21:40.349 00:21:40.349 --- 10.0.0.3 ping statistics --- 00:21:40.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.349 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:40.349 04:05:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:21:40.349 00:21:40.349 --- 10.0.0.1 ping statistics --- 00:21:40.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.349 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:21:40.349 04:05:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.349 04:05:15 -- nvmf/common.sh@421 -- # return 0 00:21:40.349 04:05:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:40.349 04:05:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.349 04:05:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:40.349 04:05:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:40.349 04:05:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.349 04:05:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:40.349 04:05:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:40.349 04:05:15 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:40.349 04:05:15 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:40.349 04:05:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.349 04:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:40.349 04:05:15 -- host/fio.sh@24 -- # nvmfpid=84125 00:21:40.349 04:05:15 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:40.349 04:05:15 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.349 04:05:15 -- host/fio.sh@28 -- # waitforlisten 84125 00:21:40.349 04:05:15 -- common/autotest_common.sh@829 -- # '[' -z 84125 ']' 00:21:40.349 04:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.349 04:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.349 04:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.349 04:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.349 04:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:40.349 [2024-11-08 04:05:15.399550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:40.349 [2024-11-08 04:05:15.399797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.608 [2024-11-08 04:05:15.539153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.608 [2024-11-08 04:05:15.638356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.608 [2024-11-08 04:05:15.638800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.608 [2024-11-08 04:05:15.638823] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.608 [2024-11-08 04:05:15.638834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
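Annotation: with connectivity verified by the pings above, fio.sh starts the target inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 84125) and waitforlisten blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern; polling rpc_get_methods is one assumed way to probe /var/tmp/spdk.sock, and the retry bound mirrors max_retries=100 from the trace:

    # Sketch: launch nvmf_tgt in the namespace, then wait for the RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
    done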
00:21:40.608 [2024-11-08 04:05:15.639005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.608 [2024-11-08 04:05:15.639146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.608 [2024-11-08 04:05:15.639321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.608 [2024-11-08 04:05:15.639332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.543 04:05:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.543 04:05:16 -- common/autotest_common.sh@862 -- # return 0 00:21:41.543 04:05:16 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.543 [2024-11-08 04:05:16.618328] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.802 04:05:16 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:41.802 04:05:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.802 04:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:41.802 04:05:16 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:42.061 Malloc1 00:21:42.061 04:05:17 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.320 04:05:17 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.578 04:05:17 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.836 [2024-11-08 04:05:17.796033] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.836 04:05:17 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:43.096 04:05:18 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:43.096 04:05:18 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:43.096 04:05:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:43.096 04:05:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:43.096 04:05:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:43.096 04:05:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:43.096 04:05:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:43.096 04:05:18 -- common/autotest_common.sh@1330 -- # shift 00:21:43.096 04:05:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:43.096 04:05:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:43.096 04:05:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:43.096 04:05:18 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:43.096 04:05:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:43.096 04:05:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:43.096 04:05:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:43.096 04:05:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:43.355 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:43.355 fio-3.35 00:21:43.355 Starting 1 thread 00:21:45.885 00:21:45.885 test: (groupid=0, jobs=1): err= 0: pid=84258: Fri Nov 8 04:05:20 2024 00:21:45.885 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(82.9MiB/2007msec) 00:21:45.885 slat (nsec): min=1690, max=1190.1k, avg=2275.63, stdev=8784.02 00:21:45.885 clat (usec): min=3452, max=12799, avg=6412.84, stdev=619.35 00:21:45.885 lat (usec): min=3513, max=12801, avg=6415.11, stdev=619.60 00:21:45.885 clat percentiles (usec): 00:21:45.885 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:21:45.885 | 30.00th=[ 6128], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:21:45.885 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7373], 00:21:45.885 | 99.00th=[ 8225], 99.50th=[ 9372], 99.90th=[10814], 99.95th=[11338], 00:21:45.885 | 99.99th=[12649] 00:21:45.885 bw ( KiB/s): min=41016, max=43576, per=100.00%, avg=42306.00, stdev=1057.36, samples=4 00:21:45.885 iops : min=10254, max=10894, avg=10576.50, stdev=264.34, samples=4 00:21:45.885 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(82.8MiB/2007msec); 0 zone resets 00:21:45.885 slat (nsec): min=1767, max=329314, avg=2312.36, stdev=2752.84 00:21:45.885 clat (usec): min=2583, max=12668, avg=5650.84, stdev=529.01 00:21:45.885 lat (usec): min=2597, max=12670, avg=5653.15, stdev=529.10 00:21:45.885 clat percentiles (usec): 00:21:45.885 | 1.00th=[ 4621], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5276], 00:21:45.885 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:21:45.885 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6390], 00:21:45.886 | 99.00th=[ 7046], 99.50th=[ 7570], 99.90th=[10814], 99.95th=[11994], 00:21:45.886 | 99.99th=[12518] 00:21:45.886 bw ( KiB/s): min=41544, max=43904, per=100.00%, avg=42290.00, stdev=1098.21, samples=4 00:21:45.886 iops : min=10386, max=10976, avg=10572.50, stdev=274.55, samples=4 00:21:45.886 lat (msec) : 4=0.08%, 10=99.73%, 20=0.19% 00:21:45.886 cpu : usr=66.15%, sys=24.43%, ctx=27, majf=0, minf=5 00:21:45.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:45.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:45.886 issued rwts: total=21218,21207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:45.886 00:21:45.886 Run status group 0 (all jobs): 00:21:45.886 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=82.9MiB (86.9MB), 
run=2007-2007msec 00:21:45.886 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=82.8MiB (86.9MB), run=2007-2007msec 00:21:45.886 04:05:20 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.886 04:05:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.886 04:05:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:45.886 04:05:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.886 04:05:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:45.886 04:05:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.886 04:05:20 -- common/autotest_common.sh@1330 -- # shift 00:21:45.886 04:05:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:45.886 04:05:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:45.886 04:05:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:45.886 04:05:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:45.886 04:05:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:45.886 04:05:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:45.886 04:05:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:45.886 04:05:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.886 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:45.886 fio-3.35 00:21:45.886 Starting 1 thread 00:21:48.415 00:21:48.415 test: (groupid=0, jobs=1): err= 0: pid=84307: Fri Nov 8 04:05:23 2024 00:21:48.415 read: IOPS=8491, BW=133MiB/s (139MB/s)(266MiB/2001msec) 00:21:48.415 slat (nsec): min=2672, max=99021, avg=3593.38, stdev=2475.66 00:21:48.415 clat (usec): min=2581, max=18926, avg=8979.50, stdev=2338.07 00:21:48.415 lat (usec): min=2585, max=18930, avg=8983.10, stdev=2338.36 00:21:48.415 clat percentiles (usec): 00:21:48.415 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7046], 00:21:48.415 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:21:48.415 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11863], 95.00th=[13566], 00:21:48.415 | 99.00th=[16319], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:21:48.415 | 99.99th=[18744] 00:21:48.415 bw ( KiB/s): min=67232, max=69504, per=50.20%, avg=68202.67, stdev=1171.54, samples=3 00:21:48.415 
iops : min= 4202, max= 4344, avg=4262.67, stdev=73.22, samples=3 00:21:48.415 write: IOPS=4865, BW=76.0MiB/s (79.7MB/s)(142MiB/1866msec); 0 zone resets 00:21:48.415 slat (usec): min=29, max=276, avg=34.95, stdev=10.39 00:21:48.415 clat (usec): min=3205, max=20952, avg=10727.33, stdev=2038.58 00:21:48.415 lat (usec): min=3235, max=20998, avg=10762.27, stdev=2041.10 00:21:48.415 clat percentiles (usec): 00:21:48.415 | 1.00th=[ 7308], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9110], 00:21:48.415 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[10945], 00:21:48.415 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13304], 95.00th=[14746], 00:21:48.415 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:21:48.415 | 99.99th=[20841] 00:21:48.415 bw ( KiB/s): min=70496, max=72576, per=91.51%, avg=71242.67, stdev=1157.47, samples=3 00:21:48.415 iops : min= 4406, max= 4536, avg=4452.67, stdev=72.34, samples=3 00:21:48.415 lat (msec) : 4=0.25%, 10=59.50%, 20=40.25%, 50=0.01% 00:21:48.415 cpu : usr=64.10%, sys=22.35%, ctx=23, majf=0, minf=1 00:21:48.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.415 issued rwts: total=16992,9079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.415 00:21:48.415 Run status group 0 (all jobs): 00:21:48.415 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (278MB), run=2001-2001msec 00:21:48.415 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=142MiB (149MB), run=1866-1866msec 00:21:48.415 04:05:23 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.415 04:05:23 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:48.415 04:05:23 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:48.415 04:05:23 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:48.415 04:05:23 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:48.415 04:05:23 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:48.415 04:05:23 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:48.415 04:05:23 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:48.415 04:05:23 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:48.415 04:05:23 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:48.416 04:05:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:48.416 04:05:23 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:48.674 Nvme0n1 00:21:48.674 04:05:23 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:48.933 04:05:23 -- host/fio.sh@53 -- # ls_guid=e07abc00-be03-4d98-94e9-822e1c347df0 00:21:48.933 04:05:23 -- host/fio.sh@54 -- # get_lvs_free_mb e07abc00-be03-4d98-94e9-822e1c347df0 00:21:48.933 04:05:23 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e07abc00-be03-4d98-94e9-822e1c347df0 00:21:48.933 04:05:23 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:48.933 04:05:23 -- common/autotest_common.sh@1355 -- # local fc 00:21:48.933 04:05:23 
-- common/autotest_common.sh@1356 -- # local cs 00:21:48.933 04:05:23 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:49.191 04:05:24 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:49.191 { 00:21:49.191 "base_bdev": "Nvme0n1", 00:21:49.191 "block_size": 4096, 00:21:49.191 "cluster_size": 1073741824, 00:21:49.191 "free_clusters": 4, 00:21:49.191 "name": "lvs_0", 00:21:49.191 "total_data_clusters": 4, 00:21:49.191 "uuid": "e07abc00-be03-4d98-94e9-822e1c347df0" 00:21:49.191 } 00:21:49.191 ]' 00:21:49.191 04:05:24 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e07abc00-be03-4d98-94e9-822e1c347df0") .free_clusters' 00:21:49.191 04:05:24 -- common/autotest_common.sh@1358 -- # fc=4 00:21:49.191 04:05:24 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e07abc00-be03-4d98-94e9-822e1c347df0") .cluster_size' 00:21:49.191 04:05:24 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:49.191 04:05:24 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:49.191 4096 00:21:49.191 04:05:24 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:49.191 04:05:24 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:49.450 574d9691-abf7-4084-af3d-1f3a190a4049 00:21:49.450 04:05:24 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:50.017 04:05:24 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:50.017 04:05:25 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:50.276 04:05:25 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:50.276 04:05:25 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:50.276 04:05:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:50.276 04:05:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:50.276 04:05:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:50.276 04:05:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:50.276 04:05:25 -- common/autotest_common.sh@1330 -- # shift 00:21:50.276 04:05:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:50.276 04:05:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:50.276 04:05:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:50.276 04:05:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:50.276 04:05:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:50.276 04:05:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:50.276 04:05:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:50.276 04:05:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:50.534 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:50.534 fio-3.35 00:21:50.534 Starting 1 thread 00:21:53.068 00:21:53.068 test: (groupid=0, jobs=1): err= 0: pid=84458: Fri Nov 8 04:05:27 2024 00:21:53.068 read: IOPS=6432, BW=25.1MiB/s (26.3MB/s)(50.5MiB/2009msec) 00:21:53.068 slat (nsec): min=1680, max=265073, avg=2167.03, stdev=2845.89 00:21:53.068 clat (usec): min=4208, max=19217, avg=10521.51, stdev=971.63 00:21:53.068 lat (usec): min=4214, max=19219, avg=10523.67, stdev=971.52 00:21:53.068 clat percentiles (usec): 00:21:53.068 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:21:53.068 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:21:53.068 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:21:53.068 | 99.00th=[12911], 99.50th=[13435], 99.90th=[16909], 99.95th=[18220], 00:21:53.068 | 99.99th=[19268] 00:21:53.068 bw ( KiB/s): min=24736, max=26208, per=99.96%, avg=25720.00, stdev=668.88, samples=4 00:21:53.068 iops : min= 6184, max= 6552, avg=6430.00, stdev=167.22, samples=4 00:21:53.068 write: IOPS=6436, BW=25.1MiB/s (26.4MB/s)(50.5MiB/2009msec); 0 zone resets 00:21:53.068 slat (nsec): min=1766, max=162330, avg=2257.84, stdev=1635.92 00:21:53.068 clat (usec): min=1891, max=18013, avg=9279.47, stdev=872.15 00:21:53.068 lat (usec): min=1897, max=18015, avg=9281.72, stdev=872.11 00:21:53.068 clat percentiles (usec): 00:21:53.068 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:21:53.068 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:21:53.068 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:21:53.068 | 99.00th=[11338], 99.50th=[11863], 99.90th=[15401], 99.95th=[16581], 00:21:53.068 | 99.99th=[17957] 00:21:53.068 bw ( KiB/s): min=25344, max=25984, per=99.95%, avg=25734.00, stdev=296.42, samples=4 00:21:53.068 iops : min= 6336, max= 6496, avg=6433.50, stdev=74.11, samples=4 00:21:53.068 lat (msec) : 2=0.01%, 4=0.03%, 10=56.11%, 20=43.85% 00:21:53.068 cpu : usr=70.22%, sys=23.61%, ctx=7, majf=0, minf=5 00:21:53.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:53.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.068 issued rwts: total=12923,12931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.068 00:21:53.068 Run status group 0 (all jobs): 00:21:53.068 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.5MiB (52.9MB), run=2009-2009msec 00:21:53.068 WRITE: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=50.5MiB (53.0MB), run=2009-2009msec 00:21:53.068 04:05:27 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:53.068 04:05:28 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:53.326 04:05:28 -- host/fio.sh@64 -- # ls_nested_guid=766f722e-f7c6-47e5-ae28-33f727e65266 00:21:53.326 04:05:28 -- host/fio.sh@65 -- # get_lvs_free_mb 766f722e-f7c6-47e5-ae28-33f727e65266 00:21:53.326 04:05:28 -- common/autotest_common.sh@1353 -- # local lvs_uuid=766f722e-f7c6-47e5-ae28-33f727e65266 00:21:53.326 04:05:28 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:53.326 04:05:28 -- common/autotest_common.sh@1355 -- # local fc 00:21:53.326 04:05:28 -- common/autotest_common.sh@1356 -- # local cs 00:21:53.326 04:05:28 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:53.585 04:05:28 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:53.585 { 00:21:53.585 "base_bdev": "Nvme0n1", 00:21:53.585 "block_size": 4096, 00:21:53.585 "cluster_size": 1073741824, 00:21:53.585 "free_clusters": 0, 00:21:53.585 "name": "lvs_0", 00:21:53.585 "total_data_clusters": 4, 00:21:53.585 "uuid": "e07abc00-be03-4d98-94e9-822e1c347df0" 00:21:53.585 }, 00:21:53.585 { 00:21:53.585 "base_bdev": "574d9691-abf7-4084-af3d-1f3a190a4049", 00:21:53.585 "block_size": 4096, 00:21:53.585 "cluster_size": 4194304, 00:21:53.585 "free_clusters": 1022, 00:21:53.585 "name": "lvs_n_0", 00:21:53.585 "total_data_clusters": 1022, 00:21:53.585 "uuid": "766f722e-f7c6-47e5-ae28-33f727e65266" 00:21:53.585 } 00:21:53.585 ]' 00:21:53.585 04:05:28 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="766f722e-f7c6-47e5-ae28-33f727e65266") .free_clusters' 00:21:53.585 04:05:28 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:53.585 04:05:28 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="766f722e-f7c6-47e5-ae28-33f727e65266") .cluster_size' 00:21:53.585 04:05:28 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:53.585 04:05:28 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:53.585 4088 00:21:53.585 04:05:28 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:53.585 04:05:28 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:53.844 a80170c3-83b8-49da-a459-5b9383ef2524 00:21:53.844 04:05:28 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:54.103 04:05:29 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:54.362 04:05:29 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:54.621 04:05:29 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:54.621 04:05:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:54.621 04:05:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:54.621 04:05:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.621 
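Annotation: the fio_plugin helper traced here assembles an fio run that drives the TCP subsystem directly from userspace; it preloads the SPDK nvme external ioengine and encodes the connection parameters in fio's --filename. A sketch of the command it ends up executing, with the paths, filename string, and --bs taken verbatim from the surrounding trace:

    # Sketch of the invocation fio_plugin builds for the SPDK nvme engine.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096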
04:05:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:54.621 04:05:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.621 04:05:29 -- common/autotest_common.sh@1330 -- # shift 00:21:54.621 04:05:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:54.621 04:05:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:54.621 04:05:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:54.621 04:05:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:54.621 04:05:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:54.621 04:05:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:54.621 04:05:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:54.621 04:05:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:54.621 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:54.621 fio-3.35 00:21:54.621 Starting 1 thread 00:21:57.155 00:21:57.155 test: (groupid=0, jobs=1): err= 0: pid=84582: Fri Nov 8 04:05:31 2024 00:21:57.155 read: IOPS=6602, BW=25.8MiB/s (27.0MB/s)(51.8MiB/2007msec) 00:21:57.155 slat (nsec): min=1735, max=328605, avg=2648.75, stdev=4306.21 00:21:57.155 clat (usec): min=4150, max=17467, avg=10398.66, stdev=1117.56 00:21:57.155 lat (usec): min=4159, max=17469, avg=10401.31, stdev=1117.40 00:21:57.155 clat percentiles (usec): 00:21:57.155 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:21:57.155 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:21:57.155 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12256], 00:21:57.155 | 99.00th=[13304], 99.50th=[13960], 99.90th=[15533], 99.95th=[15926], 00:21:57.155 | 99.99th=[16909] 00:21:57.155 bw ( KiB/s): min=25088, max=27296, per=99.85%, avg=26372.00, stdev=1085.67, samples=4 00:21:57.155 iops : min= 6272, max= 6824, avg=6593.00, stdev=271.42, samples=4 00:21:57.155 write: IOPS=6611, BW=25.8MiB/s (27.1MB/s)(51.8MiB/2007msec); 0 zone resets 00:21:57.155 slat (nsec): min=1836, max=267675, avg=2836.58, stdev=3615.48 00:21:57.155 clat (usec): min=2436, max=15689, avg=8916.35, stdev=956.99 00:21:57.155 lat (usec): min=2448, max=15691, avg=8919.19, stdev=956.90 00:21:57.155 clat percentiles (usec): 00:21:57.155 | 1.00th=[ 6915], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8160], 00:21:57.155 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:21:57.155 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:21:57.155 | 99.00th=[11469], 99.50th=[12125], 99.90th=[15008], 99.95th=[15401], 00:21:57.155 | 99.99th=[15664] 
00:21:57.156 bw ( KiB/s): min=26048, max=26944, per=99.89%, avg=26418.00, stdev=397.67, samples=4 00:21:57.156 iops : min= 6512, max= 6736, avg=6604.50, stdev=99.42, samples=4 00:21:57.156 lat (msec) : 4=0.03%, 10=63.47%, 20=36.50% 00:21:57.156 cpu : usr=70.99%, sys=21.83%, ctx=6, majf=0, minf=5 00:21:57.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:57.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.156 issued rwts: total=13252,13270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.156 00:21:57.156 Run status group 0 (all jobs): 00:21:57.156 READ: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=51.8MiB (54.3MB), run=2007-2007msec 00:21:57.156 WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=51.8MiB (54.4MB), run=2007-2007msec 00:21:57.156 04:05:32 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:57.156 04:05:32 -- host/fio.sh@74 -- # sync 00:21:57.156 04:05:32 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:57.751 04:05:32 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:57.751 04:05:32 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:58.010 04:05:32 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:58.268 04:05:33 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:59.205 04:05:34 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:59.205 04:05:34 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:59.205 04:05:34 -- host/fio.sh@86 -- # nvmftestfini 00:21:59.205 04:05:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:59.205 04:05:34 -- nvmf/common.sh@116 -- # sync 00:21:59.205 04:05:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:59.205 04:05:34 -- nvmf/common.sh@119 -- # set +e 00:21:59.205 04:05:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:59.205 04:05:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:59.205 rmmod nvme_tcp 00:21:59.205 rmmod nvme_fabrics 00:21:59.205 rmmod nvme_keyring 00:21:59.205 04:05:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:59.205 04:05:34 -- nvmf/common.sh@123 -- # set -e 00:21:59.205 04:05:34 -- nvmf/common.sh@124 -- # return 0 00:21:59.205 04:05:34 -- nvmf/common.sh@477 -- # '[' -n 84125 ']' 00:21:59.205 04:05:34 -- nvmf/common.sh@478 -- # killprocess 84125 00:21:59.205 04:05:34 -- common/autotest_common.sh@936 -- # '[' -z 84125 ']' 00:21:59.205 04:05:34 -- common/autotest_common.sh@940 -- # kill -0 84125 00:21:59.205 04:05:34 -- common/autotest_common.sh@941 -- # uname 00:21:59.205 04:05:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.205 04:05:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84125 00:21:59.205 04:05:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:59.205 04:05:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:59.205 killing process with pid 84125 00:21:59.205 04:05:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84125' 00:21:59.205 04:05:34 -- 
common/autotest_common.sh@955 -- # kill 84125 00:21:59.205 04:05:34 -- common/autotest_common.sh@960 -- # wait 84125 00:21:59.464 04:05:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:59.464 04:05:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:59.464 04:05:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:59.464 04:05:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.464 04:05:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:59.464 04:05:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.464 04:05:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.464 04:05:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.723 04:05:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:59.723 ************************************ 00:21:59.723 END TEST nvmf_fio_host 00:21:59.723 ************************************ 00:21:59.723 00:21:59.723 real 0m19.791s 00:21:59.723 user 1m25.790s 00:21:59.723 sys 0m4.595s 00:21:59.723 04:05:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:59.723 04:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:59.723 04:05:34 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:59.723 04:05:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:59.723 04:05:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:59.723 04:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:59.723 ************************************ 00:21:59.723 START TEST nvmf_failover 00:21:59.723 ************************************ 00:21:59.723 04:05:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:59.723 * Looking for test storage... 00:21:59.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.723 04:05:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:59.723 04:05:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:59.723 04:05:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:59.723 04:05:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:59.723 04:05:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:59.723 04:05:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:59.723 04:05:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:59.723 04:05:34 -- scripts/common.sh@335 -- # IFS=.-: 00:21:59.723 04:05:34 -- scripts/common.sh@335 -- # read -ra ver1 00:21:59.723 04:05:34 -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.723 04:05:34 -- scripts/common.sh@336 -- # read -ra ver2 00:21:59.723 04:05:34 -- scripts/common.sh@337 -- # local 'op=<' 00:21:59.723 04:05:34 -- scripts/common.sh@339 -- # ver1_l=2 00:21:59.723 04:05:34 -- scripts/common.sh@340 -- # ver2_l=1 00:21:59.723 04:05:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:59.723 04:05:34 -- scripts/common.sh@343 -- # case "$op" in 00:21:59.723 04:05:34 -- scripts/common.sh@344 -- # : 1 00:21:59.723 04:05:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:59.723 04:05:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.723 04:05:34 -- scripts/common.sh@364 -- # decimal 1 00:21:59.723 04:05:34 -- scripts/common.sh@352 -- # local d=1 00:21:59.723 04:05:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.723 04:05:34 -- scripts/common.sh@354 -- # echo 1 00:21:59.723 04:05:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:59.724 04:05:34 -- scripts/common.sh@365 -- # decimal 2 00:21:59.724 04:05:34 -- scripts/common.sh@352 -- # local d=2 00:21:59.724 04:05:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.724 04:05:34 -- scripts/common.sh@354 -- # echo 2 00:21:59.724 04:05:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:59.724 04:05:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:59.724 04:05:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:59.724 04:05:34 -- scripts/common.sh@367 -- # return 0 00:21:59.724 04:05:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.724 04:05:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.724 --rc genhtml_branch_coverage=1 00:21:59.724 --rc genhtml_function_coverage=1 00:21:59.724 --rc genhtml_legend=1 00:21:59.724 --rc geninfo_all_blocks=1 00:21:59.724 --rc geninfo_unexecuted_blocks=1 00:21:59.724 00:21:59.724 ' 00:21:59.724 04:05:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.724 --rc genhtml_branch_coverage=1 00:21:59.724 --rc genhtml_function_coverage=1 00:21:59.724 --rc genhtml_legend=1 00:21:59.724 --rc geninfo_all_blocks=1 00:21:59.724 --rc geninfo_unexecuted_blocks=1 00:21:59.724 00:21:59.724 ' 00:21:59.724 04:05:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.724 --rc genhtml_branch_coverage=1 00:21:59.724 --rc genhtml_function_coverage=1 00:21:59.724 --rc genhtml_legend=1 00:21:59.724 --rc geninfo_all_blocks=1 00:21:59.724 --rc geninfo_unexecuted_blocks=1 00:21:59.724 00:21:59.724 ' 00:21:59.724 04:05:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.724 --rc genhtml_branch_coverage=1 00:21:59.724 --rc genhtml_function_coverage=1 00:21:59.724 --rc genhtml_legend=1 00:21:59.724 --rc geninfo_all_blocks=1 00:21:59.724 --rc geninfo_unexecuted_blocks=1 00:21:59.724 00:21:59.724 ' 00:21:59.724 04:05:34 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.724 04:05:34 -- nvmf/common.sh@7 -- # uname -s 00:21:59.724 04:05:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.724 04:05:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.724 04:05:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.724 04:05:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.724 04:05:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.724 04:05:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.724 04:05:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.724 04:05:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.724 04:05:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.724 04:05:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:21:59.724 
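nvmf/common.sh is assembling the host identity here: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the matching host ID is the embedded UUID, as the next lines show. A short sketch of the derivation and how the pair is consumed (the parameter expansion is an assumed equivalent of what common.sh does, not a quote of it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:bcb05152-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep only the UUID after 'uuid:'
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"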
04:05:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:21:59.724 04:05:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.724 04:05:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.724 04:05:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.724 04:05:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.724 04:05:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.724 04:05:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.724 04:05:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.724 04:05:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.724 04:05:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.724 04:05:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.724 04:05:34 -- paths/export.sh@5 -- # export PATH 00:21:59.724 04:05:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.724 04:05:34 -- nvmf/common.sh@46 -- # : 0 00:21:59.724 04:05:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:59.724 04:05:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:59.724 04:05:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:59.724 04:05:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.724 04:05:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.724 04:05:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
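NET_TYPE=virt, so the nvmf_veth_init run that follows builds a self-contained test network instead of touching real NICs: the target ends up inside the nvmf_tgt_ns_spdk namespace owning 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace as 10.0.0.1, and the veth peers are joined by the nvmf_br bridge. Reduced to a skeleton (one target interface shown; the full helper also wires nvmf_tgt_if2 and the iptables ACCEPT rules seen below):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # the same reachability checks the log runs below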
00:21:59.724 04:05:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:59.724 04:05:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.724 04:05:34 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.724 04:05:34 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.724 04:05:34 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.724 04:05:34 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.724 04:05:34 -- host/failover.sh@18 -- # nvmftestinit 00:21:59.724 04:05:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.724 04:05:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.724 04:05:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:59.724 04:05:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.724 04:05:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.724 04:05:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.724 04:05:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.724 04:05:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.724 04:05:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.724 04:05:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.724 04:05:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.724 04:05:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.724 04:05:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.724 04:05:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.724 04:05:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.724 04:05:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.724 04:05:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.724 04:05:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.724 04:05:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.724 04:05:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.724 04:05:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.724 04:05:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.724 04:05:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.724 04:05:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.983 Cannot find device "nvmf_tgt_br" 00:21:59.983 04:05:34 -- nvmf/common.sh@154 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.983 Cannot find device "nvmf_tgt_br2" 00:21:59.983 04:05:34 -- nvmf/common.sh@155 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.983 04:05:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.983 Cannot find device "nvmf_tgt_br" 00:21:59.983 04:05:34 -- nvmf/common.sh@157 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.983 Cannot find device "nvmf_tgt_br2" 00:21:59.983 04:05:34 -- nvmf/common.sh@158 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.983 04:05:34 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:59.983 04:05:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.983 04:05:34 -- nvmf/common.sh@161 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.983 04:05:34 -- nvmf/common.sh@162 -- # true 00:21:59.983 04:05:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.983 04:05:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.983 04:05:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.983 04:05:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.983 04:05:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.983 04:05:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.983 04:05:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.983 04:05:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.983 04:05:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.983 04:05:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.983 04:05:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.983 04:05:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.983 04:05:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.983 04:05:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.983 04:05:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.983 04:05:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.983 04:05:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.983 04:05:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.983 04:05:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.983 04:05:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:00.243 04:05:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:00.243 04:05:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:00.243 04:05:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:00.243 04:05:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:00.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:22:00.243 00:22:00.243 --- 10.0.0.2 ping statistics --- 00:22:00.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.243 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:00.243 04:05:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:00.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:00.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:00.243 00:22:00.243 --- 10.0.0.3 ping statistics --- 00:22:00.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.243 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:00.243 04:05:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:00.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:22:00.243 00:22:00.243 --- 10.0.0.1 ping statistics --- 00:22:00.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.243 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:00.243 04:05:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.243 04:05:35 -- nvmf/common.sh@421 -- # return 0 00:22:00.243 04:05:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.243 04:05:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.243 04:05:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.243 04:05:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.243 04:05:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.243 04:05:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.243 04:05:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.243 04:05:35 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:00.243 04:05:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.243 04:05:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.243 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:22:00.243 04:05:35 -- nvmf/common.sh@469 -- # nvmfpid=84866 00:22:00.243 04:05:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:00.243 04:05:35 -- nvmf/common.sh@470 -- # waitforlisten 84866 00:22:00.243 04:05:35 -- common/autotest_common.sh@829 -- # '[' -z 84866 ']' 00:22:00.243 04:05:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.243 04:05:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.243 04:05:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.243 04:05:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.243 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:22:00.243 [2024-11-08 04:05:35.228119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:00.243 [2024-11-08 04:05:35.228208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.502 [2024-11-08 04:05:35.368685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:00.502 [2024-11-08 04:05:35.472213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.502 [2024-11-08 04:05:35.472390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.502 [2024-11-08 04:05:35.472407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:00.502 [2024-11-08 04:05:35.472431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.502 [2024-11-08 04:05:35.472558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.502 [2024-11-08 04:05:35.473489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.502 [2024-11-08 04:05:35.473503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.438 04:05:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.438 04:05:36 -- common/autotest_common.sh@862 -- # return 0 00:22:01.438 04:05:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:01.438 04:05:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.438 04:05:36 -- common/autotest_common.sh@10 -- # set +x 00:22:01.438 04:05:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.438 04:05:36 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:01.697 [2024-11-08 04:05:36.580072] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.697 04:05:36 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:01.956 Malloc0 00:22:01.956 04:05:36 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.215 04:05:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.474 04:05:37 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.474 [2024-11-08 04:05:37.579532] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.733 04:05:37 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:02.733 [2024-11-08 04:05:37.775673] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.733 04:05:37 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:02.991 [2024-11-08 04:05:37.968033] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:02.991 04:05:37 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:02.991 04:05:37 -- host/failover.sh@31 -- # bdevperf_pid=84979 00:22:02.991 04:05:37 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.991 04:05:37 -- host/failover.sh@34 -- # waitforlisten 84979 /var/tmp/bdevperf.sock 00:22:02.991 04:05:37 -- common/autotest_common.sh@829 -- # '[' -z 84979 ']' 00:22:02.991 04:05:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.991 04:05:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
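Every SPDK app in this test is started the same way: launch it with -r pointing at a private RPC socket, then spin in waitforlisten until that socket answers before sending real RPCs (bdevperf here on /var/tmp/bdevperf.sock, nvmf_tgt earlier on the default /var/tmp/spdk.sock). The gist of the wait, simplified from the real helper, which layers retries and timeouts on top:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    # poll the RPC socket with a harmless method until the app is ready
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" || exit 1    # give up if the app died
        sleep 0.5
    done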
00:22:02.991 04:05:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:02.991 04:05:37 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:02.991 04:05:37 -- common/autotest_common.sh@10 -- # set +x
00:22:04.369 04:05:39 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:04.369 04:05:39 -- common/autotest_common.sh@862 -- # return 0
00:22:04.369 04:05:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:04.369 NVMe0n1
00:22:04.369 04:05:39 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:04.628
00:22:04.628 04:05:39 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:04.628 04:05:39 -- host/failover.sh@39 -- # run_test_pid=85021
00:22:04.628 04:05:39 -- host/failover.sh@41 -- # sleep 1
00:22:06.004 04:05:40 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:06.004 [2024-11-08 04:05:40.877702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dd5b0 is same with the state(5) to be set
00:22:06.004 [2024-11-08 04:05:40.877750 through 04:05:40.878095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: previous message repeated ~46 more times for tqpair=0x15dd5b0
00:22:06.005 04:05:40 -- host/failover.sh@45 -- # sleep 3
00:22:09.292 04:05:43 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:09.292
00:22:09.292 04:05:44 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:09.551 [2024-11-08 04:05:44.436910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15de420 is same with the state(5) to be set
00:22:09.551 [2024-11-08 04:05:44.436968 through 04:05:44.437441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: previous message repeated ~63 more times for tqpair=0x15de420
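This is the heart of the failover test: bdevperf holds paths to the subsystem on ports 4420 and 4421, the script yanks the active listener, and each burst of tcp.c:1576 recv-state errors above is just the target tearing down the orphaned qpairs while I/O fails over to a surviving path. The full listener dance, condensed from the failover.sh steps traced here (a paraphrase, not the script's literal text):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn                        # add a third path
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420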
00:22:09.552 04:05:44 -- host/failover.sh@50 -- # sleep 3
00:22:12.842 04:05:47 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:12.842 [2024-11-08 04:05:47.655123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:12.842 04:05:47 -- host/failover.sh@55 -- # sleep 1
00:22:13.777 04:05:48 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:14.039 [2024-11-08 04:05:48.927343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15defb0 is same with the state(5) to be set
00:22:14.039 [2024-11-08 04:05:48.927456 through 04:05:48.928030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: previous message repeated ~63 more times for tqpair=0x15defb0
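bdevperf never prints the active path; it just keeps the 128-deep verify workload running for the full 15 seconds, and the bare '0' after 'wait 85021' below is its success status. To actually watch a failover from the host side, one option is to poll the controller state over the same RPC socket while the listeners are being cycled (illustrative only; the exact JSON layout of the reply is an assumption, check your SPDK version's output):

    # which transport ID does each NVMe0 path currently use?
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers -n NVMe0 | jq '.'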
00:22:14.040 04:05:48 -- host/failover.sh@59 -- # wait 85021
00:22:20.642 0
00:22:20.642 04:05:54 -- host/failover.sh@61 -- # killprocess 84979
00:22:20.642 04:05:54 -- common/autotest_common.sh@936 -- # '[' -z 84979 ']'
00:22:20.642 04:05:54 -- common/autotest_common.sh@940 -- # kill -0 84979
00:22:20.642 04:05:54 -- common/autotest_common.sh@941 -- # uname
00:22:20.642 04:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:20.642 04:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84979
00:22:20.642 04:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:20.642 04:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:20.642 killing process with pid 84979
00:22:20.642 04:05:54 --
00:22:20.642 04:05:54 -- common/autotest_common.sh@955 -- # kill 84979
00:22:20.642 04:05:54 -- common/autotest_common.sh@960 -- # wait 84979
00:22:20.642 04:05:55 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:20.642 [2024-11-08 04:05:38.027186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:20.642 [2024-11-08 04:05:38.027266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84979 ]
00:22:20.642 [2024-11-08 04:05:38.158023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:20.642 [2024-11-08 04:05:38.256559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:20.642 Running I/O for 15 seconds...
00:22:20.642 [2024-11-08 04:05:40.878395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.642 [2024-11-08 04:05:40.878465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for the remaining aborted READ/WRITE I/O (lba 1176-2488) repeated through 04:05:40.881967 ...]
00:22:20.646 [2024-11-08 04:05:40.881980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abe9a0 is same with the state(5) to be set
00:22:20.646 [2024-11-08 04:05:40.881996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:20.646 [2024-11-08 04:05:40.882006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:20.646 [2024-11-08 04:05:40.882020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2040 len:8 PRP1 0x0 PRP2 0x0
00:22:20.646 [2024-11-08 04:05:40.882032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.646 [2024-11-08 04:05:40.882097] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1abe9a0 was disconnected and freed. reset controller.
00:22:20.646 [2024-11-08 04:05:40.882114] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:20.646 [2024-11-08 04:05:40.882169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.646 [2024-11-08 04:05:40.882187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.646 [2024-11-08 04:05:40.882201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.646 [2024-11-08 04:05:40.882213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.646 [2024-11-08 04:05:40.882226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.646 [2024-11-08 04:05:40.882238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.646 [2024-11-08 04:05:40.882251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.646 [2024-11-08 04:05:40.882264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.646 [2024-11-08 04:05:40.882278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:20.646 [2024-11-08 04:05:40.884443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:20.646 [2024-11-08 04:05:40.884477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a49440 (9): Bad file descriptor
00:22:20.646 [2024-11-08 04:05:40.908616] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:20.646 [2024-11-08 04:05:44.437608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.646 [2024-11-08 04:05:44.437641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching aborted READ command/completion pairs (lba 51264-52048) repeated through 04:05:44.438436 ...]
00:22:20.647 [2024-11-08 04:05:44.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:52 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51656 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.438898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.438953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:20.647 [2024-11-08 04:05:44.438978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.438992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.439003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.439081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.647 [2024-11-08 04:05:44.439200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.439223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.647 [2024-11-08 04:05:44.439236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.647 [2024-11-08 04:05:44.439247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439487] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.439868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.439981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.439993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:20.648 [2024-11-08 04:05:44.440005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.440110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.440139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.648 [2024-11-08 04:05:44.440211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.648 [2024-11-08 04:05:44.440234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.648 [2024-11-08 04:05:44.440248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.649 [2024-11-08 04:05:44.440653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51968 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.649 [2024-11-08 04:05:44.440890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0890 is same with the state(5) to be set 00:22:20.649 [2024-11-08 04:05:44.440915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.649 [2024-11-08 04:05:44.440929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.649 [2024-11-08 04:05:44.440943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52056 len:8 PRP1 0x0 PRP2 0x0 00:22:20.649 [2024-11-08 04:05:44.440955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.649 [2024-11-08 04:05:44.440989] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac0890 was disconnected and freed. reset controller. 
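[editor's note] The drain above follows one pattern throughout: each queued I/O is printed by nvme_io_qpair_print_command and then completed with the generic NVMe status "Command Aborted due to SQ Deletion", rendered in the log as (00/08), i.e. status code type 0x0, status code 0x08. A minimal standalone C sketch of that drain-and-complete step follows; the struct, queue contents, and helper signatures are invented for illustration (only the function names quoted in comments come from this log, and the 00/08 status meaning from the NVMe base specification), so it models the behavior rather than reproducing SPDK internals.

/*
 * Standalone sketch, NOT actual SPDK source: when a TCP qpair is torn
 * down, every request still queued on it is completed manually with
 * "Command Aborted due to SQ Deletion" (SCT 0x0 / SC 0x08, "(00/08)").
 */
#include <stdio.h>
#include <stdint.h>

#define NVME_SCT_GENERIC            0x0   /* per the NVMe base spec */
#define NVME_SC_ABORTED_SQ_DELETION 0x08  /* per the NVMe base spec */

struct queued_req {
    uint16_t    cid;      /* command identifier */
    uint64_t    lba;      /* starting LBA of the I/O */
    const char *opcode;   /* "READ" or "WRITE" */
};

/* Hypothetical stand-in for nvme_qpair_manual_complete_request(). */
static void manual_complete(const struct queued_req *req, int sct, int sc)
{
    printf("*NOTICE*: Command completed manually:\n");
    printf("*NOTICE*: %s sqid:1 cid:%u nsid:1 lba:%lu len:8\n",
           req->opcode, (unsigned)req->cid, (unsigned long)req->lba);
    printf("*NOTICE*: ABORTED - SQ DELETION (%02x/%02x) qid:1 cid:%u\n",
           sct, sc, (unsigned)req->cid);
}

/* Hypothetical drain loop, as in nvme_qpair_abort_queued_reqs(). */
static void abort_queued_reqs(const struct queued_req *q, int n)
{
    fprintf(stderr, "*ERROR*: aborting queued i/o\n");
    for (int i = 0; i < n; i++)
        manual_complete(&q[i], NVME_SCT_GENERIC, NVME_SC_ABORTED_SQ_DELETION);
}

int main(void)
{
    /* A few entries shaped like the records elided above. */
    const struct queued_req pending[] = {
        { .cid = 125, .lba = 51248, .opcode = "READ"  },
        { .cid = 95,  .lba = 52072, .opcode = "WRITE" },
        { .cid = 0,   .lba = 52056, .opcode = "READ"  },
    };
    abort_queued_reqs(pending, 3);
    return 0;
}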
00:22:20.649 [2024-11-08 04:05:44.441004] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:20.649 [2024-11-08 04:05:44.441048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.649 [2024-11-08 04:05:44.441067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.649 [2024-11-08 04:05:44.441079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.649 [2024-11-08 04:05:44.441090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.649 [2024-11-08 04:05:44.441101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.649 [2024-11-08 04:05:44.441112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.649 [2024-11-08 04:05:44.441123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.649 [2024-11-08 04:05:44.441134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.649 [2024-11-08 04:05:44.441144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:20.649 [2024-11-08 04:05:44.441169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a49440 (9): Bad file descriptor
00:22:20.649 [2024-11-08 04:05:44.442957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:20.649 [2024-11-08 04:05:44.458868] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
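[editor's note] Once the failed qpair is freed, bdev_nvme_failover_trid moves the controller to the next transport ID (here 10.0.0.2:4421 to 10.0.0.2:4422), the admin queue's outstanding ASYNC EVENT REQUESTs are aborted the same way, and the controller is reset against the new path. A conceptual C sketch of that path rotation follows, under stated assumptions: the trid list layout and the always-succeeding try_connect helper are invented for illustration, so this models the idea, not the real bdev_nvme failover state machine.

/*
 * Conceptual sketch, NOT SPDK code: keep an ordered list of transport
 * IDs for one controller; on path failure, announce the failover, move
 * to the next trid, and reset the controller against it.
 */
#include <stdio.h>
#include <stdbool.h>

struct trid { const char *addr; const char *svcid; };

/* Only the two endpoints named in the log; order mirrors the failover. */
static const struct trid paths[] = {
    { "10.0.0.2", "4421" },   /* path that just failed */
    { "10.0.0.2", "4422" },   /* failover target */
};
static int active = 0;

/* Stand-in for a real reconnect attempt; always succeeds here. */
static bool try_connect(const struct trid *t)
{
    printf("*NOTICE*: resetting controller via %s:%s\n", t->addr, t->svcid);
    return true;
}

static void failover(void)
{
    int n = (int)(sizeof(paths) / sizeof(paths[0]));
    int next = (active + 1) % n;
    printf("*NOTICE*: Start failover from %s:%s to %s:%s\n",
           paths[active].addr, paths[active].svcid,
           paths[next].addr, paths[next].svcid);
    active = next;
    if (try_connect(&paths[active]))
        printf("*NOTICE*: Resetting controller successful.\n");
}

int main(void)
{
    failover();
    return 0;
}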
00:22:20.649 [... ~75 repeated NOTICE record pairs elided (04:05:48.928138 onward): a second abort burst on the new path, nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE sqid:1 nsid:1 lba:84600-85688 len:8, each completed by 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the run is truncated mid-record in the captured output ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.651 [2024-11-08 04:05:48.930332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.651 [2024-11-08 04:05:48.930359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.651 [2024-11-08 04:05:48.930384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.651 [2024-11-08 04:05:48.930486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.651 [2024-11-08 04:05:48.930528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.651 [2024-11-08 04:05:48.930548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 
[2024-11-08 04:05:48.930890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.930956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.930969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.930987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:64 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.652 [2024-11-08 04:05:48.931653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.652 [2024-11-08 04:05:48.931666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.652 [2024-11-08 04:05:48.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.653 [2024-11-08 04:05:48.931850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abbc80 is same with the state(5) to be set 00:22:20.653 [2024-11-08 04:05:48.931878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.653 [2024-11-08 04:05:48.931887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.653 [2024-11-08 04:05:48.931897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85416 len:8 PRP1 0x0 PRP2 0x0 00:22:20.653 [2024-11-08 04:05:48.931908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.931956] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1abbc80 was disconnected and freed. reset controller. 
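The "(00/08)" on each completion above is NVMe status code type 0x00 / status code 0x08, Command Aborted due to SQ Deletion: tearing the TCP qpair down mid-verify aborts everything still queued, so this flood is an expected side effect of the failover, not a media or transport data error. A minimal sketch for sanity-checking such a run offline (the build.log filename is an assumption, not something this job produces):

  # total aborted completions, then how many distinct LBAs they covered
  grep -c 'ABORTED - SQ DELETION' build.log
  grep -o 'lba:[0-9]*' build.log | sort -u | wc -l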
00:22:20.653 [2024-11-08 04:05:48.931973] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:20.653 [2024-11-08 04:05:48.932019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.653 [2024-11-08 04:05:48.932037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.932061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.653 [2024-11-08 04:05:48.932074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.932086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.653 [2024-11-08 04:05:48.932105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.932116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.653 [2024-11-08 04:05:48.932128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.653 [2024-11-08 04:05:48.932139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:20.653 [2024-11-08 04:05:48.932176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a49440 (9): Bad file descriptor 00:22:20.653 [2024-11-08 04:05:48.934143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.653 [2024-11-08 04:05:48.951454] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:20.653 00:22:20.653 Latency(us) 00:22:20.653 [2024-11-08T04:05:55.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.653 [2024-11-08T04:05:55.764Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:20.653 Verification LBA range: start 0x0 length 0x4000 00:22:20.653 NVMe0n1 : 15.01 15242.86 59.54 234.91 0.00 8255.50 547.37 14000.87 00:22:20.653 [2024-11-08T04:05:55.764Z] =================================================================================================================== 00:22:20.653 [2024-11-08T04:05:55.764Z] Total : 15242.86 59.54 234.91 0.00 8255.50 547.37 14000.87 00:22:20.653 Received shutdown signal, test time was about 15.000000 seconds 00:22:20.653 00:22:20.653 Latency(us) 00:22:20.653 [2024-11-08T04:05:55.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.653 [2024-11-08T04:05:55.764Z] =================================================================================================================== 00:22:20.653 [2024-11-08T04:05:55.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.653 04:05:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:20.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
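The pattern behind the count=3 check and the RPC trace that follows: the subsystem gets two extra listeners, the same bdevperf controller name is attached once per path, and detaching the active path forces a failover onto the next one (each failover logs 'Resetting controller successful', hence three expected matches). Condensed as a sketch built from the same rpc.py calls this run makes, with waits and error handling omitted:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # extra listeners beside the default port 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three paths under one controller name in bdevperf
  for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # dropping the active path triggers the failover being tested
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1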
00:22:20.653 04:05:55 -- host/failover.sh@65 -- # count=3 00:22:20.653 04:05:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:20.653 04:05:55 -- host/failover.sh@73 -- # bdevperf_pid=85226 00:22:20.653 04:05:55 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:20.653 04:05:55 -- host/failover.sh@75 -- # waitforlisten 85226 /var/tmp/bdevperf.sock 00:22:20.653 04:05:55 -- common/autotest_common.sh@829 -- # '[' -z 85226 ']' 00:22:20.653 04:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.653 04:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.653 04:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.653 04:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.653 04:05:55 -- common/autotest_common.sh@10 -- # set +x 00:22:21.220 04:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.220 04:05:56 -- common/autotest_common.sh@862 -- # return 0 00:22:21.220 04:05:56 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:21.479 [2024-11-08 04:05:56.386213] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:21.479 04:05:56 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:21.738 [2024-11-08 04:05:56.682647] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:21.738 04:05:56 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.997 NVMe0n1 00:22:21.997 04:05:57 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.256 00:22:22.514 04:05:57 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.773 00:22:22.773 04:05:57 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.773 04:05:57 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:22.773 04:05:57 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.340 04:05:58 -- host/failover.sh@87 -- # sleep 3 00:22:26.626 04:06:01 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:26.626 04:06:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:26.626 04:06:01 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.626 04:06:01 -- host/failover.sh@90 -- # run_test_pid=85369 00:22:26.626 04:06:01 -- host/failover.sh@92 -- # wait 85369 00:22:27.563 0 00:22:27.563 04:06:02 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:27.563 [2024-11-08 04:05:55.183958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:27.563 [2024-11-08 04:05:55.184051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85226 ] 00:22:27.563 [2024-11-08 04:05:55.307197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.563 [2024-11-08 04:05:55.387760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.563 [2024-11-08 04:05:58.129263] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:27.563 [2024-11-08 04:05:58.129358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.563 [2024-11-08 04:05:58.129380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.563 [2024-11-08 04:05:58.129394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.563 [2024-11-08 04:05:58.129406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.563 [2024-11-08 04:05:58.129429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.563 [2024-11-08 04:05:58.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.563 [2024-11-08 04:05:58.129470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.563 [2024-11-08 04:05:58.129486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.563 [2024-11-08 04:05:58.129497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.563 [2024-11-08 04:05:58.129533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.563 [2024-11-08 04:05:58.129559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd43440 (9): Bad file descriptor 00:22:27.563 [2024-11-08 04:05:58.133296] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:27.563 Running I/O for 1 seconds... 
00:22:27.563 00:22:27.563 Latency(us) 00:22:27.564 [2024-11-08T04:06:02.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.564 [2024-11-08T04:06:02.675Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:27.564 Verification LBA range: start 0x0 length 0x4000 00:22:27.564 NVMe0n1 : 1.01 14233.90 55.60 0.00 0.00 8949.88 1489.45 14954.12 00:22:27.564 [2024-11-08T04:06:02.675Z] =================================================================================================================== 00:22:27.564 [2024-11-08T04:06:02.675Z] Total : 14233.90 55.60 0.00 0.00 8949.88 1489.45 14954.12 00:22:27.564 04:06:02 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.564 04:06:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:27.823 04:06:02 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.082 04:06:03 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.082 04:06:03 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:28.340 04:06:03 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.599 04:06:03 -- host/failover.sh@101 -- # sleep 3 00:22:31.886 04:06:06 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.886 04:06:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:31.886 04:06:06 -- host/failover.sh@108 -- # killprocess 85226 00:22:31.886 04:06:06 -- common/autotest_common.sh@936 -- # '[' -z 85226 ']' 00:22:31.886 04:06:06 -- common/autotest_common.sh@940 -- # kill -0 85226 00:22:31.886 04:06:06 -- common/autotest_common.sh@941 -- # uname 00:22:31.886 04:06:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.886 04:06:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85226 00:22:31.886 04:06:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:31.886 04:06:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:31.886 04:06:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85226' 00:22:31.886 killing process with pid 85226 00:22:31.886 04:06:06 -- common/autotest_common.sh@955 -- # kill 85226 00:22:31.886 04:06:06 -- common/autotest_common.sh@960 -- # wait 85226 00:22:32.145 04:06:07 -- host/failover.sh@110 -- # sync 00:22:32.145 04:06:07 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.403 04:06:07 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:32.403 04:06:07 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:32.403 04:06:07 -- host/failover.sh@116 -- # nvmftestfini 00:22:32.403 04:06:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:32.403 04:06:07 -- nvmf/common.sh@116 -- # sync 00:22:32.404 04:06:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:32.404 04:06:07 -- nvmf/common.sh@119 -- # set +e 00:22:32.404 04:06:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:32.404 04:06:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:32.404 rmmod nvme_tcp 
00:22:32.404 rmmod nvme_fabrics 00:22:32.404 rmmod nvme_keyring 00:22:32.404 04:06:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:32.404 04:06:07 -- nvmf/common.sh@123 -- # set -e 00:22:32.404 04:06:07 -- nvmf/common.sh@124 -- # return 0 00:22:32.404 04:06:07 -- nvmf/common.sh@477 -- # '[' -n 84866 ']' 00:22:32.404 04:06:07 -- nvmf/common.sh@478 -- # killprocess 84866 00:22:32.404 04:06:07 -- common/autotest_common.sh@936 -- # '[' -z 84866 ']' 00:22:32.404 04:06:07 -- common/autotest_common.sh@940 -- # kill -0 84866 00:22:32.404 04:06:07 -- common/autotest_common.sh@941 -- # uname 00:22:32.404 04:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:32.404 04:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84866 00:22:32.404 killing process with pid 84866 00:22:32.404 04:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:32.404 04:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:32.404 04:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84866' 00:22:32.404 04:06:07 -- common/autotest_common.sh@955 -- # kill 84866 00:22:32.404 04:06:07 -- common/autotest_common.sh@960 -- # wait 84866 00:22:32.663 04:06:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:32.663 04:06:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:32.663 04:06:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:32.663 04:06:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.663 04:06:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:32.663 04:06:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.663 04:06:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.663 04:06:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.921 04:06:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:32.921 00:22:32.921 real 0m33.160s 00:22:32.921 user 2m8.252s 00:22:32.921 sys 0m4.927s 00:22:32.921 04:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:32.921 04:06:07 -- common/autotest_common.sh@10 -- # set +x 00:22:32.921 ************************************ 00:22:32.921 END TEST nvmf_failover 00:22:32.921 ************************************ 00:22:32.921 04:06:07 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:32.921 04:06:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:32.921 04:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.921 04:06:07 -- common/autotest_common.sh@10 -- # set +x 00:22:32.921 ************************************ 00:22:32.921 START TEST nvmf_discovery 00:22:32.921 ************************************ 00:22:32.921 04:06:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:32.921 * Looking for test storage... 
00:22:32.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.921 04:06:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:32.921 04:06:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:32.921 04:06:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:32.921 04:06:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:32.921 04:06:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:32.921 04:06:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:32.921 04:06:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:32.921 04:06:08 -- scripts/common.sh@335 -- # IFS=.-: 00:22:32.921 04:06:08 -- scripts/common.sh@335 -- # read -ra ver1 00:22:32.921 04:06:08 -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.921 04:06:08 -- scripts/common.sh@336 -- # read -ra ver2 00:22:32.921 04:06:08 -- scripts/common.sh@337 -- # local 'op=<' 00:22:32.921 04:06:08 -- scripts/common.sh@339 -- # ver1_l=2 00:22:32.921 04:06:08 -- scripts/common.sh@340 -- # ver2_l=1 00:22:32.921 04:06:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:32.921 04:06:08 -- scripts/common.sh@343 -- # case "$op" in 00:22:32.921 04:06:08 -- scripts/common.sh@344 -- # : 1 00:22:32.921 04:06:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:32.921 04:06:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.921 04:06:08 -- scripts/common.sh@364 -- # decimal 1 00:22:32.921 04:06:08 -- scripts/common.sh@352 -- # local d=1 00:22:32.921 04:06:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.921 04:06:08 -- scripts/common.sh@354 -- # echo 1 00:22:32.921 04:06:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:32.921 04:06:08 -- scripts/common.sh@365 -- # decimal 2 00:22:33.180 04:06:08 -- scripts/common.sh@352 -- # local d=2 00:22:33.180 04:06:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.180 04:06:08 -- scripts/common.sh@354 -- # echo 2 00:22:33.180 04:06:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:33.180 04:06:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:33.180 04:06:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:33.180 04:06:08 -- scripts/common.sh@367 -- # return 0 00:22:33.180 04:06:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.180 04:06:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.180 --rc genhtml_branch_coverage=1 00:22:33.180 --rc genhtml_function_coverage=1 00:22:33.180 --rc genhtml_legend=1 00:22:33.180 --rc geninfo_all_blocks=1 00:22:33.180 --rc geninfo_unexecuted_blocks=1 00:22:33.180 00:22:33.180 ' 00:22:33.180 04:06:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.180 --rc genhtml_branch_coverage=1 00:22:33.180 --rc genhtml_function_coverage=1 00:22:33.180 --rc genhtml_legend=1 00:22:33.180 --rc geninfo_all_blocks=1 00:22:33.180 --rc geninfo_unexecuted_blocks=1 00:22:33.180 00:22:33.180 ' 00:22:33.180 04:06:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.180 --rc genhtml_branch_coverage=1 00:22:33.180 --rc genhtml_function_coverage=1 00:22:33.180 --rc genhtml_legend=1 00:22:33.180 --rc geninfo_all_blocks=1 00:22:33.180 --rc geninfo_unexecuted_blocks=1 00:22:33.180 00:22:33.180 ' 00:22:33.180 
04:06:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.180 --rc genhtml_branch_coverage=1 00:22:33.180 --rc genhtml_function_coverage=1 00:22:33.180 --rc genhtml_legend=1 00:22:33.180 --rc geninfo_all_blocks=1 00:22:33.180 --rc geninfo_unexecuted_blocks=1 00:22:33.180 00:22:33.180 ' 00:22:33.180 04:06:08 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.180 04:06:08 -- nvmf/common.sh@7 -- # uname -s 00:22:33.180 04:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.180 04:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.180 04:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.180 04:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.180 04:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.180 04:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.180 04:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.180 04:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.180 04:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.180 04:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:22:33.180 04:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:22:33.180 04:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.180 04:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.180 04:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.180 04:06:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.180 04:06:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.180 04:06:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.180 04:06:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.180 04:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.180 04:06:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.180 04:06:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.180 04:06:08 -- paths/export.sh@5 -- # export PATH 00:22:33.180 04:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.180 04:06:08 -- nvmf/common.sh@46 -- # : 0 00:22:33.180 04:06:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:33.180 04:06:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:33.180 04:06:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:33.180 04:06:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.180 04:06:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.180 04:06:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:33.180 04:06:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:33.180 04:06:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:33.180 04:06:08 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:33.180 04:06:08 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:33.180 04:06:08 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:33.180 04:06:08 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:33.180 04:06:08 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:33.180 04:06:08 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:33.180 04:06:08 -- host/discovery.sh@25 -- # nvmftestinit 00:22:33.180 04:06:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:33.180 04:06:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.180 04:06:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:33.180 04:06:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:33.180 04:06:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:33.180 04:06:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.180 04:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.180 04:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.180 04:06:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:33.180 04:06:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:33.180 04:06:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.180 04:06:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.180 04:06:08 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:33.180 04:06:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:33.180 04:06:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.180 04:06:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.180 04:06:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.180 04:06:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.180 04:06:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.180 04:06:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.180 04:06:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.180 04:06:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.180 04:06:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:33.180 04:06:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:33.180 Cannot find device "nvmf_tgt_br" 00:22:33.180 04:06:08 -- nvmf/common.sh@154 -- # true 00:22:33.180 04:06:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.180 Cannot find device "nvmf_tgt_br2" 00:22:33.180 04:06:08 -- nvmf/common.sh@155 -- # true 00:22:33.180 04:06:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:33.180 04:06:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:33.180 Cannot find device "nvmf_tgt_br" 00:22:33.180 04:06:08 -- nvmf/common.sh@157 -- # true 00:22:33.180 04:06:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:33.180 Cannot find device "nvmf_tgt_br2" 00:22:33.180 04:06:08 -- nvmf/common.sh@158 -- # true 00:22:33.180 04:06:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:33.180 04:06:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:33.180 04:06:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.180 04:06:08 -- nvmf/common.sh@161 -- # true 00:22:33.181 04:06:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.181 04:06:08 -- nvmf/common.sh@162 -- # true 00:22:33.181 04:06:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.181 04:06:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.181 04:06:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.181 04:06:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.181 04:06:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.181 04:06:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.181 04:06:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.181 04:06:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.181 04:06:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:33.181 04:06:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:33.181 04:06:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:33.181 04:06:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:33.181 04:06:08 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:33.181 04:06:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.181 04:06:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.181 04:06:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.181 04:06:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:33.439 04:06:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:33.439 04:06:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.439 04:06:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.439 04:06:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.439 04:06:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.439 04:06:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.439 04:06:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:33.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:33.439 00:22:33.439 --- 10.0.0.2 ping statistics --- 00:22:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.440 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:33.440 04:06:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:33.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:33.440 00:22:33.440 --- 10.0.0.3 ping statistics --- 00:22:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.440 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:33.440 04:06:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:33.440 00:22:33.440 --- 10.0.0.1 ping statistics --- 00:22:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.440 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:33.440 04:06:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.440 04:06:08 -- nvmf/common.sh@421 -- # return 0 00:22:33.440 04:06:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:33.440 04:06:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.440 04:06:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:33.440 04:06:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:33.440 04:06:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.440 04:06:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:33.440 04:06:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:33.440 04:06:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:33.440 04:06:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:33.440 04:06:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.440 04:06:08 -- common/autotest_common.sh@10 -- # set +x 00:22:33.440 04:06:08 -- nvmf/common.sh@469 -- # nvmfpid=85674 00:22:33.440 04:06:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.440 04:06:08 -- nvmf/common.sh@470 -- # waitforlisten 85674 00:22:33.440 04:06:08 -- common/autotest_common.sh@829 -- # '[' -z 85674 ']' 00:22:33.440 04:06:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.440 04:06:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.440 04:06:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.440 04:06:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.440 04:06:08 -- common/autotest_common.sh@10 -- # set +x 00:22:33.440 [2024-11-08 04:06:08.451211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:33.440 [2024-11-08 04:06:08.451304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.698 [2024-11-08 04:06:08.591832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.698 [2024-11-08 04:06:08.685293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:33.698 [2024-11-08 04:06:08.685505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.698 [2024-11-08 04:06:08.685524] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.698 [2024-11-08 04:06:08.685534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
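(For orientation: the nvmf_veth_init run traced above first tears down any leftover interfaces — hence the "Cannot find device" / "Cannot open network namespace" lines — and then builds a three-port veth topology. Reconstructed from the traced commands, not the verbatim helper, the setup reduces to the sketch below; interface and namespace names are the ones nvmf/common.sh uses. The target is then launched inside the namespace via ip netns exec nvmf_tgt_ns_spdk, so 10.0.0.1 is the initiator address and 10.0.0.2/10.0.0.3 are the two target ports, which the three pings above verify end to end.)

    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per port; the *_br ends stay in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the three root-namespace ends together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT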
00:22:33.698 [2024-11-08 04:06:08.685571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.634 04:06:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.634 04:06:09 -- common/autotest_common.sh@862 -- # return 0 00:22:34.634 04:06:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:34.634 04:06:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 04:06:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.634 04:06:09 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.634 04:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 [2024-11-08 04:06:09.515887] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.634 04:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.634 04:06:09 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:34.634 04:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 [2024-11-08 04:06:09.524018] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:34.634 04:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.634 04:06:09 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:34.634 04:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 null0 00:22:34.634 04:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.634 04:06:09 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:34.634 04:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 null1 00:22:34.634 04:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.634 04:06:09 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:34.634 04:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 04:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.634 04:06:09 -- host/discovery.sh@45 -- # hostpid=85729 00:22:34.634 04:06:09 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:34.634 04:06:09 -- host/discovery.sh@46 -- # waitforlisten 85729 /tmp/host.sock 00:22:34.634 04:06:09 -- common/autotest_common.sh@829 -- # '[' -z 85729 ']' 00:22:34.634 04:06:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:34.634 04:06:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.634 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:34.634 04:06:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:34.634 04:06:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.634 04:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.634 [2024-11-08 04:06:09.609983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
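(The target-side setup just completed, and the bdev_nvme_start_discovery call that follows on /tmp/host.sock, condense to the RPC sequence below — a sketch using SPDK's scripts/rpc.py, which the rpc_cmd wrapper in the trace ultimately drives. The second nvmf_tgt launched here with -r /tmp/host.sock plays the host/initiator role, so host-side RPCs are addressed to that socket.)

    # Target side (default RPC socket):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    # Host side, against the second target's RPC socket:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test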
00:22:34.634 [2024-11-08 04:06:09.610077] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85729 ] 00:22:34.892 [2024-11-08 04:06:09.747132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.892 [2024-11-08 04:06:09.836831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:34.892 [2024-11-08 04:06:09.836972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.827 04:06:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.827 04:06:10 -- common/autotest_common.sh@862 -- # return 0 00:22:35.827 04:06:10 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.827 04:06:10 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:35.827 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.827 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.827 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.827 04:06:10 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:35.827 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@72 -- # notify_id=0 00:22:35.828 04:06:10 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # sort 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:35.828 04:06:10 -- host/discovery.sh@79 -- # get_bdev_list 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # sort 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:35.828 04:06:10 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # sort 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:35.828 04:06:10 -- host/discovery.sh@83 -- # get_bdev_list 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # sort 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:35.828 04:06:10 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # sort 00:22:35.828 04:06:10 -- host/discovery.sh@59 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.828 04:06:10 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:35.828 04:06:10 -- host/discovery.sh@87 -- # get_bdev_list 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.828 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.828 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # sort 00:22:35.828 04:06:10 -- host/discovery.sh@55 -- # xargs 00:22:35.828 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:10 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:36.099 04:06:10 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:36.099 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.099 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:36.099 [2024-11-08 04:06:10.956303] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.099 04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:10 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:36.099 04:06:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.099 04:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.099 04:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:36.099 04:06:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.099 04:06:10 -- host/discovery.sh@59 -- # sort 00:22:36.099 04:06:10 -- host/discovery.sh@59 -- # xargs 00:22:36.099 
04:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:11 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:36.099 04:06:11 -- host/discovery.sh@93 -- # get_bdev_list 00:22:36.099 04:06:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.099 04:06:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.099 04:06:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.099 04:06:11 -- host/discovery.sh@55 -- # sort 00:22:36.099 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:22:36.099 04:06:11 -- host/discovery.sh@55 -- # xargs 00:22:36.099 04:06:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:11 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:36.099 04:06:11 -- host/discovery.sh@94 -- # get_notification_count 00:22:36.099 04:06:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.099 04:06:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.099 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:22:36.099 04:06:11 -- host/discovery.sh@74 -- # jq '. | length' 00:22:36.099 04:06:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:11 -- host/discovery.sh@74 -- # notification_count=0 00:22:36.099 04:06:11 -- host/discovery.sh@75 -- # notify_id=0 00:22:36.099 04:06:11 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:11 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:36.099 04:06:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.099 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:22:36.099 04:06:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.099 04:06:11 -- host/discovery.sh@100 -- # sleep 1 00:22:36.687 [2024-11-08 04:06:11.638977] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.687 [2024-11-08 04:06:11.639005] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.687 [2024-11-08 04:06:11.639021] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.687 [2024-11-08 04:06:11.725076] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:36.687 [2024-11-08 04:06:11.780665] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.687 [2024-11-08 04:06:11.780689] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.254 04:06:12 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:37.254 04:06:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.254 04:06:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.254 04:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.254 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:37.254 04:06:12 -- host/discovery.sh@59 -- # sort 00:22:37.254 04:06:12 -- host/discovery.sh@59 -- # xargs 00:22:37.254 04:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@102 -- # get_bdev_list 00:22:37.254 04:06:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:37.254 04:06:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.254 04:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.254 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:37.254 04:06:12 -- host/discovery.sh@55 -- # sort 00:22:37.254 04:06:12 -- host/discovery.sh@55 -- # xargs 00:22:37.254 04:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:37.254 04:06:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:37.254 04:06:12 -- host/discovery.sh@63 -- # sort -n 00:22:37.254 04:06:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:37.254 04:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.254 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:37.254 04:06:12 -- host/discovery.sh@63 -- # xargs 00:22:37.254 04:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@104 -- # get_notification_count 00:22:37.254 04:06:12 -- host/discovery.sh@74 -- # jq '. | length' 00:22:37.254 04:06:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:37.254 04:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.254 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:37.254 04:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@74 -- # notification_count=1 00:22:37.254 04:06:12 -- host/discovery.sh@75 -- # notify_id=1 00:22:37.254 04:06:12 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:37.254 04:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.254 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:37.254 04:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.254 04:06:12 -- host/discovery.sh@109 -- # sleep 1 00:22:38.630 04:06:13 -- host/discovery.sh@110 -- # get_bdev_list 00:22:38.630 04:06:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.630 04:06:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.630 04:06:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.630 04:06:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.630 04:06:13 -- host/discovery.sh@55 -- # sort 00:22:38.630 04:06:13 -- host/discovery.sh@55 -- # xargs 00:22:38.630 04:06:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.630 04:06:13 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.630 04:06:13 -- host/discovery.sh@111 -- # get_notification_count 00:22:38.630 04:06:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:38.630 04:06:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.630 04:06:13 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:38.630 04:06:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.630 04:06:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.630 04:06:13 -- host/discovery.sh@74 -- # notification_count=1 00:22:38.630 04:06:13 -- host/discovery.sh@75 -- # notify_id=2 00:22:38.630 04:06:13 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:38.630 04:06:13 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:38.630 04:06:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.630 04:06:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.630 [2024-11-08 04:06:13.477713] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:38.630 [2024-11-08 04:06:13.478409] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:38.630 [2024-11-08 04:06:13.478454] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:38.630 04:06:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.630 04:06:13 -- host/discovery.sh@117 -- # sleep 1 00:22:38.630 [2024-11-08 04:06:13.564471] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:38.631 [2024-11-08 04:06:13.621641] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:38.631 [2024-11-08 04:06:13.621662] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:38.631 [2024-11-08 04:06:13.621668] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:39.566 04:06:14 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:39.566 04:06:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.566 04:06:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.566 04:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.566 04:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.566 04:06:14 -- host/discovery.sh@59 -- # sort 00:22:39.566 04:06:14 -- host/discovery.sh@59 -- # xargs 00:22:39.566 04:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.566 04:06:14 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.566 04:06:14 -- host/discovery.sh@119 -- # get_bdev_list 00:22:39.566 04:06:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.566 04:06:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.567 04:06:14 -- host/discovery.sh@55 -- # sort 00:22:39.567 04:06:14 -- host/discovery.sh@55 -- # xargs 00:22:39.567 04:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.567 04:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.567 04:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.567 04:06:14 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.567 04:06:14 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:39.567 04:06:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.567 04:06:14 -- host/discovery.sh@63 -- # sort -n 00:22:39.567 04:06:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.567 04:06:14 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:39.567 04:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.567 04:06:14 -- host/discovery.sh@63 -- # xargs 00:22:39.567 04:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.567 04:06:14 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:39.567 04:06:14 -- host/discovery.sh@121 -- # get_notification_count 00:22:39.567 04:06:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:39.567 04:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.567 04:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.567 04:06:14 -- host/discovery.sh@74 -- # jq '. | length' 00:22:39.567 04:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.826 04:06:14 -- host/discovery.sh@74 -- # notification_count=0 00:22:39.826 04:06:14 -- host/discovery.sh@75 -- # notify_id=2 00:22:39.826 04:06:14 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:39.826 04:06:14 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.826 04:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.826 04:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:39.826 [2024-11-08 04:06:14.706422] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:39.826 [2024-11-08 04:06:14.706468] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.826 [2024-11-08 04:06:14.708589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.826 [2024-11-08 04:06:14.708619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.826 [2024-11-08 04:06:14.708630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.826 [2024-11-08 04:06:14.708638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.826 [2024-11-08 04:06:14.708646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.826 [2024-11-08 04:06:14.708654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.826 [2024-11-08 04:06:14.708662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.826 [2024-11-08 04:06:14.708670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.826 [2024-11-08 04:06:14.708678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.826 04:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.826 04:06:14 -- host/discovery.sh@127 -- # sleep 1 00:22:39.826 [2024-11-08 04:06:14.718558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.826 [2024-11-08 04:06:14.728575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.826 [2024-11-08 04:06:14.728667] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.728711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.728726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.826 [2024-11-08 04:06:14.728736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.826 [2024-11-08 04:06:14.728750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.826 [2024-11-08 04:06:14.728763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.826 [2024-11-08 04:06:14.728770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.826 [2024-11-08 04:06:14.728779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.826 [2024-11-08 04:06:14.728792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.826 [2024-11-08 04:06:14.738621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.826 [2024-11-08 04:06:14.738690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.738730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.738744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.826 [2024-11-08 04:06:14.738755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.826 [2024-11-08 04:06:14.738768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.826 [2024-11-08 04:06:14.738781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.826 [2024-11-08 04:06:14.738788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.826 [2024-11-08 04:06:14.738796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.826 [2024-11-08 04:06:14.738808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:39.826 [2024-11-08 04:06:14.748664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.826 [2024-11-08 04:06:14.748741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.748783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.826 [2024-11-08 04:06:14.748798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.827 [2024-11-08 04:06:14.748807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.827 [2024-11-08 04:06:14.748821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.827 [2024-11-08 04:06:14.748834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.827 [2024-11-08 04:06:14.748841] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.827 [2024-11-08 04:06:14.748849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.827 [2024-11-08 04:06:14.748861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.827 [2024-11-08 04:06:14.758712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.827 [2024-11-08 04:06:14.758788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.758829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.758844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.827 [2024-11-08 04:06:14.758853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.827 [2024-11-08 04:06:14.758868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.827 [2024-11-08 04:06:14.758888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.827 [2024-11-08 04:06:14.758898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.827 [2024-11-08 04:06:14.758905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.827 [2024-11-08 04:06:14.758918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:39.827 [2024-11-08 04:06:14.768758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.827 [2024-11-08 04:06:14.768823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.768862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.768876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.827 [2024-11-08 04:06:14.768885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.827 [2024-11-08 04:06:14.768899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.827 [2024-11-08 04:06:14.768911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.827 [2024-11-08 04:06:14.768919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.827 [2024-11-08 04:06:14.768926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.827 [2024-11-08 04:06:14.768939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.827 [2024-11-08 04:06:14.778799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.827 [2024-11-08 04:06:14.778864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.778903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.778917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.827 [2024-11-08 04:06:14.778927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.827 [2024-11-08 04:06:14.778940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.827 [2024-11-08 04:06:14.778959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.827 [2024-11-08 04:06:14.778968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.827 [2024-11-08 04:06:14.778976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.827 [2024-11-08 04:06:14.778987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
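(The repeated "connect() failed, errno = 111" / "Resetting controller failed." cycles above — with one more below — are the expected effect of the step at host/discovery.sh@126: the 4420 listener was just removed while the host still had a controller attached to that port, so every reconnect attempt gets ECONNREFUSED and bdev_nvme retries at roughly 10 ms intervals until the next discovery log page reports the 4420 path gone ("...4420 not found") while 4421 persists ("...4421 found again"). Driven by hand, the triggering step is just the one removal RPC:)

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420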
00:22:39.827 [2024-11-08 04:06:14.788838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.827 [2024-11-08 04:06:14.788903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.788942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.827 [2024-11-08 04:06:14.788956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6639c0 with addr=10.0.0.2, port=4420 00:22:39.827 [2024-11-08 04:06:14.788965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6639c0 is same with the state(5) to be set 00:22:39.827 [2024-11-08 04:06:14.788979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6639c0 (9): Bad file descriptor 00:22:39.827 [2024-11-08 04:06:14.788991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:39.827 [2024-11-08 04:06:14.788998] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:39.827 [2024-11-08 04:06:14.789005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:39.827 [2024-11-08 04:06:14.789017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.827 [2024-11-08 04:06:14.792502] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:39.827 [2024-11-08 04:06:14.792533] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:40.763 04:06:15 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:40.763 04:06:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.763 04:06:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.763 04:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.763 04:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.763 04:06:15 -- host/discovery.sh@59 -- # sort 00:22:40.763 04:06:15 -- host/discovery.sh@59 -- # xargs 00:22:40.763 04:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.763 04:06:15 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.763 04:06:15 -- host/discovery.sh@129 -- # get_bdev_list 00:22:40.763 04:06:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.763 04:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.763 04:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.763 04:06:15 -- host/discovery.sh@55 -- # sort 00:22:40.763 04:06:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.763 04:06:15 -- host/discovery.sh@55 -- # xargs 00:22:40.763 04:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.763 04:06:15 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.763 04:06:15 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:40.763 04:06:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.763 04:06:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.763 04:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.763 04:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:40.763 04:06:15 -- 
host/discovery.sh@63 -- # xargs 00:22:40.763 04:06:15 -- host/discovery.sh@63 -- # sort -n 00:22:40.763 04:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.022 04:06:15 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:41.022 04:06:15 -- host/discovery.sh@131 -- # get_notification_count 00:22:41.022 04:06:15 -- host/discovery.sh@74 -- # jq '. | length' 00:22:41.022 04:06:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:41.022 04:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.022 04:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:41.022 04:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.022 04:06:15 -- host/discovery.sh@74 -- # notification_count=0 00:22:41.022 04:06:15 -- host/discovery.sh@75 -- # notify_id=2 00:22:41.022 04:06:15 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:41.022 04:06:15 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:41.022 04:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.022 04:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:41.022 04:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.022 04:06:15 -- host/discovery.sh@135 -- # sleep 1 00:22:41.958 04:06:16 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:41.958 04:06:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.958 04:06:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:41.958 04:06:16 -- host/discovery.sh@59 -- # sort 00:22:41.958 04:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.958 04:06:16 -- common/autotest_common.sh@10 -- # set +x 00:22:41.958 04:06:16 -- host/discovery.sh@59 -- # xargs 00:22:41.958 04:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.958 04:06:17 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:41.958 04:06:17 -- host/discovery.sh@137 -- # get_bdev_list 00:22:41.958 04:06:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.958 04:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.958 04:06:17 -- common/autotest_common.sh@10 -- # set +x 00:22:41.958 04:06:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.958 04:06:17 -- host/discovery.sh@55 -- # sort 00:22:41.958 04:06:17 -- host/discovery.sh@55 -- # xargs 00:22:41.958 04:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.958 04:06:17 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:41.958 04:06:17 -- host/discovery.sh@138 -- # get_notification_count 00:22:42.216 04:06:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.216 04:06:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:42.217 04:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.217 04:06:17 -- common/autotest_common.sh@10 -- # set +x 00:22:42.217 04:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.217 04:06:17 -- host/discovery.sh@74 -- # notification_count=2 00:22:42.217 04:06:17 -- host/discovery.sh@75 -- # notify_id=4 00:22:42.217 04:06:17 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:42.217 04:06:17 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.217 04:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.217 04:06:17 -- common/autotest_common.sh@10 -- # set +x 00:22:43.152 [2024-11-08 04:06:18.129499] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:43.152 [2024-11-08 04:06:18.129518] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:43.152 [2024-11-08 04:06:18.129532] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:43.152 [2024-11-08 04:06:18.215876] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:43.411 [2024-11-08 04:06:18.274625] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:43.411 [2024-11-08 04:06:18.274786] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.411 04:06:18 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@650 -- # local es=0 00:22:43.411 04:06:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.411 04:06:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 2024/11/08 04:06:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:43.411 request: 00:22:43.411 { 00:22:43.411 "method": "bdev_nvme_start_discovery", 00:22:43.411 "params": { 00:22:43.411 "name": "nvme", 00:22:43.411 "trtype": "tcp", 00:22:43.411 "traddr": "10.0.0.2", 00:22:43.411 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.411 "adrfam": "ipv4", 00:22:43.411 "trsvcid": "8009", 00:22:43.411 "wait_for_attach": true 00:22:43.411 } 
00:22:43.411 } 00:22:43.411 Got JSON-RPC error response 00:22:43.411 GoRPCClient: error on JSON-RPC call 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:43.411 04:06:18 -- common/autotest_common.sh@653 -- # es=1 00:22:43.411 04:06:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.411 04:06:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.411 04:06:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.411 04:06:18 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # sort 00:22:43.411 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # xargs 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.411 04:06:18 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:43.411 04:06:18 -- host/discovery.sh@147 -- # get_bdev_list 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # sort 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # xargs 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.411 04:06:18 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.411 04:06:18 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@650 -- # local es=0 00:22:43.411 04:06:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:43.411 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.411 04:06:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 2024/11/08 04:06:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:43.411 request: 00:22:43.411 { 00:22:43.411 "method": "bdev_nvme_start_discovery", 00:22:43.411 "params": { 00:22:43.411 "name": "nvme_second", 00:22:43.411 "trtype": "tcp", 00:22:43.411 "traddr": "10.0.0.2", 00:22:43.411 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.411 "adrfam": "ipv4", 00:22:43.411 
"trsvcid": "8009", 00:22:43.411 "wait_for_attach": true 00:22:43.411 } 00:22:43.411 } 00:22:43.411 Got JSON-RPC error response 00:22:43.411 GoRPCClient: error on JSON-RPC call 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:43.411 04:06:18 -- common/autotest_common.sh@653 -- # es=1 00:22:43.411 04:06:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.411 04:06:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.411 04:06:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.411 04:06:18 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # sort 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.411 04:06:18 -- host/discovery.sh@67 -- # xargs 00:22:43.411 04:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.411 04:06:18 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:43.411 04:06:18 -- host/discovery.sh@153 -- # get_bdev_list 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.411 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.411 04:06:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.412 04:06:18 -- host/discovery.sh@55 -- # sort 00:22:43.412 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:43.412 04:06:18 -- host/discovery.sh@55 -- # xargs 00:22:43.412 04:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.671 04:06:18 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.671 04:06:18 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.671 04:06:18 -- common/autotest_common.sh@650 -- # local es=0 00:22:43.671 04:06:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.671 04:06:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:43.671 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.671 04:06:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:43.671 04:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.671 04:06:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.671 04:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.671 04:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:44.605 [2024-11-08 04:06:19.540737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.605 [2024-11-08 04:06:19.540801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.605 [2024-11-08 04:06:19.540818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f970 with addr=10.0.0.2, port=8010 00:22:44.605 [2024-11-08 04:06:19.540830] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:44.605 [2024-11-08 04:06:19.540838] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:44.605 [2024-11-08 04:06:19.540846] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:45.539 [2024-11-08 04:06:20.540761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.539 [2024-11-08 04:06:20.540836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.539 [2024-11-08 04:06:20.540854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f970 with addr=10.0.0.2, port=8010 00:22:45.539 [2024-11-08 04:06:20.540873] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:45.539 [2024-11-08 04:06:20.540881] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:45.539 [2024-11-08 04:06:20.540889] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:46.474 [2024-11-08 04:06:21.540665] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:46.474 2024/11/08 04:06:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:46.474 request: 00:22:46.474 { 00:22:46.474 "method": "bdev_nvme_start_discovery", 00:22:46.474 "params": { 00:22:46.474 "name": "nvme_second", 00:22:46.474 "trtype": "tcp", 00:22:46.474 "traddr": "10.0.0.2", 00:22:46.474 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:46.474 "adrfam": "ipv4", 00:22:46.474 "trsvcid": "8010", 00:22:46.474 "attach_timeout_ms": 3000 00:22:46.474 } 00:22:46.474 } 00:22:46.474 Got JSON-RPC error response 00:22:46.474 GoRPCClient: error on JSON-RPC call 00:22:46.474 04:06:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:46.474 04:06:21 -- common/autotest_common.sh@653 -- # es=1 00:22:46.474 04:06:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.474 04:06:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.474 04:06:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.474 04:06:21 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:46.474 04:06:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.474 04:06:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:46.474 04:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.474 04:06:21 -- common/autotest_common.sh@10 -- # set +x 00:22:46.474 04:06:21 -- host/discovery.sh@67 -- # sort 00:22:46.474 04:06:21 -- host/discovery.sh@67 -- # xargs 00:22:46.474 04:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.733 04:06:21 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:46.733 04:06:21 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:46.733 04:06:21 -- host/discovery.sh@162 -- # kill 85729 00:22:46.733 04:06:21 -- host/discovery.sh@163 -- # nvmftestfini 00:22:46.733 04:06:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:46.733 04:06:21 -- nvmf/common.sh@116 -- # sync 00:22:46.733 04:06:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:46.733 04:06:21 -- nvmf/common.sh@119 -- # set +e 00:22:46.733 04:06:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:46.733 04:06:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:46.733 
rmmod nvme_tcp 00:22:46.733 rmmod nvme_fabrics 00:22:46.733 rmmod nvme_keyring 00:22:46.733 04:06:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:46.733 04:06:21 -- nvmf/common.sh@123 -- # set -e 00:22:46.733 04:06:21 -- nvmf/common.sh@124 -- # return 0 00:22:46.733 04:06:21 -- nvmf/common.sh@477 -- # '[' -n 85674 ']' 00:22:46.733 04:06:21 -- nvmf/common.sh@478 -- # killprocess 85674 00:22:46.734 04:06:21 -- common/autotest_common.sh@936 -- # '[' -z 85674 ']' 00:22:46.734 04:06:21 -- common/autotest_common.sh@940 -- # kill -0 85674 00:22:46.734 04:06:21 -- common/autotest_common.sh@941 -- # uname 00:22:46.734 04:06:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.734 04:06:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85674 00:22:46.734 04:06:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.734 04:06:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.734 04:06:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85674' 00:22:46.734 killing process with pid 85674 00:22:46.734 04:06:21 -- common/autotest_common.sh@955 -- # kill 85674 00:22:46.734 04:06:21 -- common/autotest_common.sh@960 -- # wait 85674 00:22:46.992 04:06:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.992 04:06:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.992 04:06:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.992 04:06:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.992 04:06:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.992 04:06:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.992 04:06:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.992 04:06:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.992 04:06:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:46.992 00:22:46.992 real 0m14.171s 00:22:46.992 user 0m27.772s 00:22:46.992 sys 0m1.689s 00:22:46.992 04:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:46.992 ************************************ 00:22:46.992 04:06:22 -- common/autotest_common.sh@10 -- # set +x 00:22:46.992 END TEST nvmf_discovery 00:22:46.992 ************************************ 00:22:46.992 04:06:22 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:46.992 04:06:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:46.992 04:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:46.992 04:06:22 -- common/autotest_common.sh@10 -- # set +x 00:22:46.992 ************************************ 00:22:46.992 START TEST nvmf_discovery_remove_ifc 00:22:46.992 ************************************ 00:22:46.992 04:06:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:47.252 * Looking for test storage... 
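The bdev_nvme_start_discovery failure above is the point of that test step: nothing listens on port 8010, so the host app is expected to report Code=-110 (connection timed out) over its JSON-RPC socket, and the script asserts the nonzero exit status. A sketch of the same request through SPDK's rpc.py client against the /tmp/host.sock socket this run uses; the short-flag spellings are copied from rpc_cmd invocations elsewhere in this log, but treat --attach-timeout-ms as an assumption rather than a reference:

    # Expected to fail while nothing serves 10.0.0.2:8010; the test treats
    # the resulting nonzero exit status as success.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --attach-timeout-ms 3000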
00:22:47.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.252 04:06:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:47.252 04:06:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:47.252 04:06:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:47.252 04:06:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:47.252 04:06:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:47.252 04:06:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:47.252 04:06:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:47.252 04:06:22 -- scripts/common.sh@335 -- # IFS=.-: 00:22:47.252 04:06:22 -- scripts/common.sh@335 -- # read -ra ver1 00:22:47.252 04:06:22 -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.252 04:06:22 -- scripts/common.sh@336 -- # read -ra ver2 00:22:47.252 04:06:22 -- scripts/common.sh@337 -- # local 'op=<' 00:22:47.252 04:06:22 -- scripts/common.sh@339 -- # ver1_l=2 00:22:47.252 04:06:22 -- scripts/common.sh@340 -- # ver2_l=1 00:22:47.252 04:06:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:47.252 04:06:22 -- scripts/common.sh@343 -- # case "$op" in 00:22:47.252 04:06:22 -- scripts/common.sh@344 -- # : 1 00:22:47.252 04:06:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:47.252 04:06:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.252 04:06:22 -- scripts/common.sh@364 -- # decimal 1 00:22:47.252 04:06:22 -- scripts/common.sh@352 -- # local d=1 00:22:47.252 04:06:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.252 04:06:22 -- scripts/common.sh@354 -- # echo 1 00:22:47.252 04:06:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:47.252 04:06:22 -- scripts/common.sh@365 -- # decimal 2 00:22:47.252 04:06:22 -- scripts/common.sh@352 -- # local d=2 00:22:47.252 04:06:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.252 04:06:22 -- scripts/common.sh@354 -- # echo 2 00:22:47.252 04:06:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:47.252 04:06:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:47.252 04:06:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:47.252 04:06:22 -- scripts/common.sh@367 -- # return 0 00:22:47.252 04:06:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.252 04:06:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.252 --rc genhtml_branch_coverage=1 00:22:47.252 --rc genhtml_function_coverage=1 00:22:47.252 --rc genhtml_legend=1 00:22:47.252 --rc geninfo_all_blocks=1 00:22:47.252 --rc geninfo_unexecuted_blocks=1 00:22:47.252 00:22:47.252 ' 00:22:47.252 04:06:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.252 --rc genhtml_branch_coverage=1 00:22:47.252 --rc genhtml_function_coverage=1 00:22:47.252 --rc genhtml_legend=1 00:22:47.252 --rc geninfo_all_blocks=1 00:22:47.252 --rc geninfo_unexecuted_blocks=1 00:22:47.252 00:22:47.252 ' 00:22:47.252 04:06:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.252 --rc genhtml_branch_coverage=1 00:22:47.252 --rc genhtml_function_coverage=1 00:22:47.252 --rc genhtml_legend=1 00:22:47.252 --rc geninfo_all_blocks=1 00:22:47.252 --rc geninfo_unexecuted_blocks=1 00:22:47.252 00:22:47.252 ' 00:22:47.252 
04:06:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.252 --rc genhtml_branch_coverage=1 00:22:47.252 --rc genhtml_function_coverage=1 00:22:47.252 --rc genhtml_legend=1 00:22:47.252 --rc geninfo_all_blocks=1 00:22:47.252 --rc geninfo_unexecuted_blocks=1 00:22:47.252 00:22:47.252 ' 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.252 04:06:22 -- nvmf/common.sh@7 -- # uname -s 00:22:47.252 04:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.252 04:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.252 04:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.252 04:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.252 04:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.252 04:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.252 04:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.252 04:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.252 04:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.252 04:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:22:47.252 04:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:22:47.252 04:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.252 04:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.252 04:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.252 04:06:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.252 04:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.252 04:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.252 04:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.252 04:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.252 04:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.252 04:06:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.252 04:06:22 -- paths/export.sh@5 -- # export PATH 00:22:47.252 04:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.252 04:06:22 -- nvmf/common.sh@46 -- # : 0 00:22:47.252 04:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.252 04:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.252 04:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.252 04:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.252 04:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.252 04:06:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.252 04:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.252 04:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:47.252 04:06:22 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:47.252 04:06:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.252 04:06:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.252 04:06:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.252 04:06:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.252 04:06:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.252 04:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.252 04:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.252 04:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.252 04:06:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:47.252 04:06:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:47.252 04:06:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.252 04:06:22 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.252 04:06:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:47.252 04:06:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:47.253 04:06:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:47.253 04:06:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:47.253 04:06:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:47.253 04:06:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.253 04:06:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:47.253 04:06:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:47.253 04:06:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:47.253 04:06:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:47.253 04:06:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:47.253 04:06:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:47.253 Cannot find device "nvmf_tgt_br" 00:22:47.253 04:06:22 -- nvmf/common.sh@154 -- # true 00:22:47.253 04:06:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:47.253 Cannot find device "nvmf_tgt_br2" 00:22:47.253 04:06:22 -- nvmf/common.sh@155 -- # true 00:22:47.253 04:06:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:47.253 04:06:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:47.511 Cannot find device "nvmf_tgt_br" 00:22:47.511 04:06:22 -- nvmf/common.sh@157 -- # true 00:22:47.511 04:06:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:47.511 Cannot find device "nvmf_tgt_br2" 00:22:47.511 04:06:22 -- nvmf/common.sh@158 -- # true 00:22:47.511 04:06:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:47.511 04:06:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:47.511 04:06:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:47.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.511 04:06:22 -- nvmf/common.sh@161 -- # true 00:22:47.511 04:06:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:47.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.511 04:06:22 -- nvmf/common.sh@162 -- # true 00:22:47.511 04:06:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:47.511 04:06:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:47.511 04:06:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:47.511 04:06:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:47.511 04:06:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:47.511 04:06:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:47.511 04:06:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:47.511 04:06:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:47.511 04:06:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:47.511 04:06:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:47.511 04:06:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:47.511 04:06:22 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:47.511 04:06:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:47.511 04:06:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:47.511 04:06:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:47.511 04:06:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.511 04:06:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:47.511 04:06:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:47.511 04:06:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:47.511 04:06:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:47.511 04:06:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:47.771 04:06:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:47.771 04:06:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:47.771 04:06:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:47.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:47.771 00:22:47.771 --- 10.0.0.2 ping statistics --- 00:22:47.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.771 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:47.771 04:06:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:47.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:47.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:47.771 00:22:47.771 --- 10.0.0.3 ping statistics --- 00:22:47.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.771 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:47.771 04:06:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:47.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:47.771 00:22:47.771 --- 10.0.0.1 ping statistics --- 00:22:47.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.771 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:47.771 04:06:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.771 04:06:22 -- nvmf/common.sh@421 -- # return 0 00:22:47.771 04:06:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:47.771 04:06:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.771 04:06:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:47.771 04:06:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:47.771 04:06:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.771 04:06:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:47.771 04:06:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:47.771 04:06:22 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:47.771 04:06:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.771 04:06:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.771 04:06:22 -- common/autotest_common.sh@10 -- # set +x 00:22:47.771 04:06:22 -- nvmf/common.sh@469 -- # nvmfpid=86236 00:22:47.771 04:06:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.771 04:06:22 -- nvmf/common.sh@470 -- # waitforlisten 86236 00:22:47.771 04:06:22 -- common/autotest_common.sh@829 -- # '[' -z 86236 ']' 00:22:47.771 04:06:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.771 04:06:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.771 04:06:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.771 04:06:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.771 04:06:22 -- common/autotest_common.sh@10 -- # set +x 00:22:47.771 [2024-11-08 04:06:22.725728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:47.771 [2024-11-08 04:06:22.725804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.771 [2024-11-08 04:06:22.867050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.030 [2024-11-08 04:06:22.970357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.030 [2024-11-08 04:06:22.970577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.030 [2024-11-08 04:06:22.970596] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.030 [2024-11-08 04:06:22.970608] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
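The nvmf_veth_init run that precedes those ping checks builds the test topology from scratch; the same sequence repeats for the digest test further down. Condensed to a single target interface, a sketch using only commands that appear verbatim in this log:

    # One veth pair for the initiator, one for the target; the target end
    # is moved into the nvmf_tgt_ns_spdk namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 is the initiator, 10.0.0.2 the in-namespace target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Bridge the host-side peers together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the sanity check logged above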
00:22:48.030 [2024-11-08 04:06:22.970642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.966 04:06:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.966 04:06:23 -- common/autotest_common.sh@862 -- # return 0 00:22:48.966 04:06:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.966 04:06:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.966 04:06:23 -- common/autotest_common.sh@10 -- # set +x 00:22:48.966 04:06:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.966 04:06:23 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:48.966 04:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.966 04:06:23 -- common/autotest_common.sh@10 -- # set +x 00:22:48.966 [2024-11-08 04:06:23.812161] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.966 [2024-11-08 04:06:23.820301] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:48.966 null0 00:22:48.966 [2024-11-08 04:06:23.852231] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.966 04:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.966 04:06:23 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86292 00:22:48.966 04:06:23 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:48.966 04:06:23 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86292 /tmp/host.sock 00:22:48.966 04:06:23 -- common/autotest_common.sh@829 -- # '[' -z 86292 ']' 00:22:48.966 04:06:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:48.966 04:06:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.966 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.966 04:06:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.966 04:06:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.966 04:06:23 -- common/autotest_common.sh@10 -- # set +x 00:22:48.966 [2024-11-08 04:06:23.934680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
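Two SPDK apps run side by side from here: the nvmf target (pid 86236) inside the namespace, and a second nvmf_tgt acting as the discovery host on /tmp/host.sock (pid 86292). A sketch of the launch pattern, assuming waitforlisten simply polls the given RPC socket until the app answers:

    # Target app, confined to the namespace built above (command as logged).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"              # default socket /var/tmp/spdk.sock

    # Host app on its own core and socket, started paused (--wait-for-rpc)
    # so bdev_nvme options can be set before the framework initializes.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init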
00:22:48.966 [2024-11-08 04:06:23.934773] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86292 ] 00:22:49.224 [2024-11-08 04:06:24.076677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.224 [2024-11-08 04:06:24.197829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.224 [2024-11-08 04:06:24.198034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.160 04:06:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.160 04:06:24 -- common/autotest_common.sh@862 -- # return 0 00:22:50.160 04:06:24 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.160 04:06:24 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:50.160 04:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.160 04:06:24 -- common/autotest_common.sh@10 -- # set +x 00:22:50.160 04:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.161 04:06:24 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:50.161 04:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.161 04:06:24 -- common/autotest_common.sh@10 -- # set +x 00:22:50.161 04:06:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.161 04:06:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:50.161 04:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.161 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:22:51.096 [2024-11-08 04:06:26.075475] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:51.096 [2024-11-08 04:06:26.075503] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:51.096 [2024-11-08 04:06:26.075521] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:51.096 [2024-11-08 04:06:26.161596] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:51.355 [2024-11-08 04:06:26.217338] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:51.355 [2024-11-08 04:06:26.217385] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:51.355 [2024-11-08 04:06:26.217426] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:51.355 [2024-11-08 04:06:26.217444] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:51.355 [2024-11-08 04:06:26.217475] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:51.355 04:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.355 04:06:26 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.355 [2024-11-08 04:06:26.223793] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf73840 was disconnected and freed. delete nvme_qpair. 00:22:51.355 04:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.355 04:06:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.355 04:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.355 04:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.355 04:06:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.355 04:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.355 04:06:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.311 04:06:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.311 04:06:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.311 04:06:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.311 04:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.311 04:06:27 -- common/autotest_common.sh@10 -- # set +x 00:22:52.311 04:06:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.311 04:06:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.311 04:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.585 04:06:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.585 04:06:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.520 04:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.520 04:06:28 -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.520 04:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:53.520 04:06:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
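The repeating bdev_get_bdevs | jq | sort | xargs blocks above, separated by sleep 1, are the test's wait loop: after the --wait-for-attach discovery call it polls the host app's bdev list until the expected name appears (nvme0n1 here), or until the list drains to the empty string once the interface is pulled. A minimal reconstruction of the two helpers:

    get_bdev_list() {
        # Flatten the bdev names into one sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the list matches the expected value:
        # "nvme0n1", "nvme1n1", or '' for "no bdevs left".
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }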
00:22:54.455 04:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.455 04:06:29 -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.455 04:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.455 04:06:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.827 04:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.827 04:06:30 -- common/autotest_common.sh@10 -- # set +x 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.827 04:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.827 04:06:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.762 04:06:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.762 04:06:31 -- common/autotest_common.sh@10 -- # set +x 00:22:56.762 04:06:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.762 [2024-11-08 04:06:31.645244] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:56.762 [2024-11-08 04:06:31.645326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.762 [2024-11-08 04:06:31.645341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.762 [2024-11-08 04:06:31.645352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.762 [2024-11-08 04:06:31.645362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.762 [2024-11-08 04:06:31.645371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.762 [2024-11-08 04:06:31.645380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.762 [2024-11-08 04:06:31.645388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.762 [2024-11-08 04:06:31.645397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.762 [2024-11-08 
04:06:31.645405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.762 [2024-11-08 04:06:31.645413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.762 [2024-11-08 04:06:31.645421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeea9f0 is same with the state(5) to be set 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:56.762 04:06:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.762 [2024-11-08 04:06:31.655241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea9f0 (9): Bad file descriptor 00:22:56.762 [2024-11-08 04:06:31.665261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:57.697 04:06:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:57.697 04:06:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:57.697 04:06:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.697 04:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.697 04:06:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:57.697 04:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:57.697 04:06:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:57.697 [2024-11-08 04:06:32.712537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:58.631 [2024-11-08 04:06:33.736549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:58.631 [2024-11-08 04:06:33.736647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeea9f0 with addr=10.0.0.2, port=4420 00:22:58.631 [2024-11-08 04:06:33.736681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeea9f0 is same with the state(5) to be set 00:22:58.631 [2024-11-08 04:06:33.736728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.631 [2024-11-08 04:06:33.736750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.631 [2024-11-08 04:06:33.736771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.631 [2024-11-08 04:06:33.736792] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:58.631 [2024-11-08 04:06:33.737598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea9f0 (9): Bad file descriptor 00:22:58.631 [2024-11-08 04:06:33.737673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.631 [2024-11-08 04:06:33.737726] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:58.631 [2024-11-08 04:06:33.737794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.631 [2024-11-08 04:06:33.737824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.631 [2024-11-08 04:06:33.737850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.631 [2024-11-08 04:06:33.737873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.631 [2024-11-08 04:06:33.737896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.631 [2024-11-08 04:06:33.737924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.631 [2024-11-08 04:06:33.737947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.631 [2024-11-08 04:06:33.737968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.631 [2024-11-08 04:06:33.737990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.631 [2024-11-08 04:06:33.738011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.631 [2024-11-08 04:06:33.738032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
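The ABORTED/SQ DELETION notices and the failed controller reset above are the intended fallout of the step that opened this phase: the test deleted the target's address and downed its interface, then waited for the bdev list to drain. In sketch form, reusing wait_for_bdev from above:

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # Reconnect attempts fail (errno 110/111) until --ctrlr-loss-timeout-sec
    # (2s in this run) expires and the controller, then the bdev, are deleted.
    wait_for_bdev ''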
00:22:58.631 [2024-11-08 04:06:33.738094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeae00 (9): Bad file descriptor 00:22:58.631 [2024-11-08 04:06:33.739093] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:58.631 [2024-11-08 04:06:33.739161] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:58.890 04:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.890 04:06:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:58.890 04:06:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.825 04:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.825 04:06:34 -- common/autotest_common.sh@10 -- # set +x 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.825 04:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:59.825 04:06:34 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.826 04:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.826 04:06:34 -- common/autotest_common.sh@10 -- # set +x 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.826 04:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:59.826 04:06:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.761 [2024-11-08 04:06:35.746692] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:00.761 [2024-11-08 04:06:35.746719] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:00.761 [2024-11-08 04:06:35.746735] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.761 [2024-11-08 04:06:35.832799] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:01.020 [2024-11-08 04:06:35.887748] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:01.020 [2024-11-08 04:06:35.887793] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:01.020 [2024-11-08 04:06:35.887816] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:01.020 [2024-11-08 04:06:35.887831] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:23:01.020 [2024-11-08 04:06:35.887840] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.020 [2024-11-08 04:06:35.895209] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf2e080 was disconnected and freed. delete nvme_qpair. 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.020 04:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.020 04:06:35 -- common/autotest_common.sh@10 -- # set +x 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.020 04:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:01.020 04:06:35 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86292 00:23:01.020 04:06:35 -- common/autotest_common.sh@936 -- # '[' -z 86292 ']' 00:23:01.020 04:06:35 -- common/autotest_common.sh@940 -- # kill -0 86292 00:23:01.020 04:06:35 -- common/autotest_common.sh@941 -- # uname 00:23:01.020 04:06:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.020 04:06:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86292 00:23:01.020 killing process with pid 86292 00:23:01.020 04:06:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:01.020 04:06:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:01.020 04:06:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86292' 00:23:01.020 04:06:35 -- common/autotest_common.sh@955 -- # kill 86292 00:23:01.020 04:06:35 -- common/autotest_common.sh@960 -- # wait 86292 00:23:01.278 04:06:36 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:01.278 04:06:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:01.278 04:06:36 -- nvmf/common.sh@116 -- # sync 00:23:01.278 04:06:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:01.278 04:06:36 -- nvmf/common.sh@119 -- # set +e 00:23:01.278 04:06:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:01.278 04:06:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:01.278 rmmod nvme_tcp 00:23:01.278 rmmod nvme_fabrics 00:23:01.278 rmmod nvme_keyring 00:23:01.537 04:06:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:01.537 04:06:36 -- nvmf/common.sh@123 -- # set -e 00:23:01.537 04:06:36 -- nvmf/common.sh@124 -- # return 0 00:23:01.537 04:06:36 -- nvmf/common.sh@477 -- # '[' -n 86236 ']' 00:23:01.537 04:06:36 -- nvmf/common.sh@478 -- # killprocess 86236 00:23:01.537 04:06:36 -- common/autotest_common.sh@936 -- # '[' -z 86236 ']' 00:23:01.537 04:06:36 -- common/autotest_common.sh@940 -- # kill -0 86236 00:23:01.537 04:06:36 -- common/autotest_common.sh@941 -- # uname 00:23:01.537 04:06:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.537 04:06:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86236 00:23:01.537 killing process with pid 86236 00:23:01.537 04:06:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.537 04:06:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
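killprocess is the teardown helper invoked for both pids here (86292, then 86236). A reconstruction from the surrounding xtrace; the sudo branch the helper checks for is left as a comment, since reactor_0 and reactor_1 take the plain path:

    killprocess() {
        local pid=$1
        kill -0 "$pid"    # fail fast if the pid is already gone
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases process_name == sudo and signals the
        # child instead; not needed for the SPDK reactors seen in this log.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }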
00:23:01.537 04:06:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86236' 00:23:01.537 04:06:36 -- common/autotest_common.sh@955 -- # kill 86236 00:23:01.537 04:06:36 -- common/autotest_common.sh@960 -- # wait 86236 00:23:01.795 04:06:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:01.796 04:06:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:01.796 04:06:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:01.796 04:06:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.796 04:06:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:01.796 04:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.796 04:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.796 04:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.796 04:06:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:01.796 00:23:01.796 real 0m14.616s 00:23:01.796 user 0m25.049s 00:23:01.796 sys 0m1.600s 00:23:01.796 04:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:01.796 04:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:01.796 ************************************ 00:23:01.796 END TEST nvmf_discovery_remove_ifc 00:23:01.796 ************************************ 00:23:01.796 04:06:36 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:23:01.796 04:06:36 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:01.796 04:06:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:01.796 04:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:01.796 04:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:01.796 ************************************ 00:23:01.796 START TEST nvmf_digest 00:23:01.796 ************************************ 00:23:01.796 04:06:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:01.796 * Looking for test storage... 00:23:01.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:01.796 04:06:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:01.796 04:06:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:01.796 04:06:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:02.054 04:06:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:02.054 04:06:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:02.054 04:06:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:02.054 04:06:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:02.055 04:06:36 -- scripts/common.sh@335 -- # IFS=.-: 00:23:02.055 04:06:36 -- scripts/common.sh@335 -- # read -ra ver1 00:23:02.055 04:06:36 -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.055 04:06:36 -- scripts/common.sh@336 -- # read -ra ver2 00:23:02.055 04:06:36 -- scripts/common.sh@337 -- # local 'op=<' 00:23:02.055 04:06:36 -- scripts/common.sh@339 -- # ver1_l=2 00:23:02.055 04:06:36 -- scripts/common.sh@340 -- # ver2_l=1 00:23:02.055 04:06:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:02.055 04:06:36 -- scripts/common.sh@343 -- # case "$op" in 00:23:02.055 04:06:36 -- scripts/common.sh@344 -- # : 1 00:23:02.055 04:06:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:02.055 04:06:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.055 04:06:36 -- scripts/common.sh@364 -- # decimal 1 00:23:02.055 04:06:36 -- scripts/common.sh@352 -- # local d=1 00:23:02.055 04:06:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.055 04:06:36 -- scripts/common.sh@354 -- # echo 1 00:23:02.055 04:06:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:02.055 04:06:36 -- scripts/common.sh@365 -- # decimal 2 00:23:02.055 04:06:36 -- scripts/common.sh@352 -- # local d=2 00:23:02.055 04:06:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.055 04:06:36 -- scripts/common.sh@354 -- # echo 2 00:23:02.055 04:06:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:02.055 04:06:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:02.055 04:06:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:02.055 04:06:36 -- scripts/common.sh@367 -- # return 0 00:23:02.055 04:06:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.055 04:06:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.055 --rc genhtml_branch_coverage=1 00:23:02.055 --rc genhtml_function_coverage=1 00:23:02.055 --rc genhtml_legend=1 00:23:02.055 --rc geninfo_all_blocks=1 00:23:02.055 --rc geninfo_unexecuted_blocks=1 00:23:02.055 00:23:02.055 ' 00:23:02.055 04:06:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.055 --rc genhtml_branch_coverage=1 00:23:02.055 --rc genhtml_function_coverage=1 00:23:02.055 --rc genhtml_legend=1 00:23:02.055 --rc geninfo_all_blocks=1 00:23:02.055 --rc geninfo_unexecuted_blocks=1 00:23:02.055 00:23:02.055 ' 00:23:02.055 04:06:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.055 --rc genhtml_branch_coverage=1 00:23:02.055 --rc genhtml_function_coverage=1 00:23:02.055 --rc genhtml_legend=1 00:23:02.055 --rc geninfo_all_blocks=1 00:23:02.055 --rc geninfo_unexecuted_blocks=1 00:23:02.055 00:23:02.055 ' 00:23:02.055 04:06:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.055 --rc genhtml_branch_coverage=1 00:23:02.055 --rc genhtml_function_coverage=1 00:23:02.055 --rc genhtml_legend=1 00:23:02.055 --rc geninfo_all_blocks=1 00:23:02.055 --rc geninfo_unexecuted_blocks=1 00:23:02.055 00:23:02.055 ' 00:23:02.055 04:06:36 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.055 04:06:36 -- nvmf/common.sh@7 -- # uname -s 00:23:02.055 04:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.055 04:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.055 04:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.055 04:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.055 04:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.055 04:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.055 04:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.055 04:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.055 04:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.055 04:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:23:02.055 
04:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:23:02.055 04:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.055 04:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.055 04:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.055 04:06:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.055 04:06:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.055 04:06:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.055 04:06:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.055 04:06:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.055 04:06:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.055 04:06:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.055 04:06:36 -- paths/export.sh@5 -- # export PATH 00:23:02.055 04:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.055 04:06:36 -- nvmf/common.sh@46 -- # : 0 00:23:02.055 04:06:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:02.055 04:06:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:02.055 04:06:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:02.055 04:06:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.055 04:06:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.055 04:06:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
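Both test scripts open with the same coverage probe seen a few lines above: take the last field of lcov --version with awk and compare it against 2 via the lt helper from scripts/common.sh, picking old- or new-style LCOV options accordingly. A compact reconstruction of that comparison, assuming purely numeric dot-separated components (the real helper also splits on '-' and ':' and routes each component through a decimal sanitizer):

    cmp_versions() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$3"
        local op=$2 v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            # First differing component decides; missing components count as 0.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]    # every component matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.x: use --rc lcov_branch_coverage=1'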
00:23:02.055 04:06:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:02.055 04:06:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:02.055 04:06:36 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:02.055 04:06:36 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:02.055 04:06:36 -- host/digest.sh@16 -- # runtime=2 00:23:02.055 04:06:36 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:23:02.055 04:06:36 -- host/digest.sh@132 -- # nvmftestinit 00:23:02.055 04:06:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:02.055 04:06:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.055 04:06:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:02.055 04:06:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:02.055 04:06:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:02.055 04:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.055 04:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.055 04:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.055 04:06:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:02.055 04:06:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:02.055 04:06:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.055 04:06:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.055 04:06:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:02.055 04:06:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:02.055 04:06:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:02.055 04:06:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:02.055 04:06:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:02.055 04:06:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.055 04:06:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:02.055 04:06:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:02.055 04:06:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:02.055 04:06:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:02.055 04:06:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:02.055 04:06:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:02.055 Cannot find device "nvmf_tgt_br" 00:23:02.055 04:06:37 -- nvmf/common.sh@154 -- # true 00:23:02.055 04:06:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.055 Cannot find device "nvmf_tgt_br2" 00:23:02.055 04:06:37 -- nvmf/common.sh@155 -- # true 00:23:02.055 04:06:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:02.055 04:06:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:02.055 Cannot find device "nvmf_tgt_br" 00:23:02.055 04:06:37 -- nvmf/common.sh@157 -- # true 00:23:02.055 04:06:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:02.055 Cannot find device "nvmf_tgt_br2" 00:23:02.055 04:06:37 -- nvmf/common.sh@158 -- # true 00:23:02.055 04:06:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:02.055 04:06:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:02.055 
04:06:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.055 04:06:37 -- nvmf/common.sh@161 -- # true 00:23:02.055 04:06:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.056 04:06:37 -- nvmf/common.sh@162 -- # true 00:23:02.056 04:06:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:02.056 04:06:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:02.056 04:06:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:02.056 04:06:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:02.056 04:06:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:02.056 04:06:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:02.314 04:06:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:02.314 04:06:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:02.314 04:06:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:02.314 04:06:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:02.314 04:06:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:02.314 04:06:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:02.314 04:06:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:02.315 04:06:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.315 04:06:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:02.315 04:06:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:02.315 04:06:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:02.315 04:06:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:02.315 04:06:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:02.315 04:06:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:02.315 04:06:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:02.315 04:06:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:02.315 04:06:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:02.315 04:06:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:02.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:02.315 00:23:02.315 --- 10.0.0.2 ping statistics --- 00:23:02.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.315 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:02.315 04:06:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:02.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:02.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:02.315 00:23:02.315 --- 10.0.0.3 ping statistics --- 00:23:02.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.315 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:02.315 04:06:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:02.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:02.315 00:23:02.315 --- 10.0.0.1 ping statistics --- 00:23:02.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.315 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:02.315 04:06:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.315 04:06:37 -- nvmf/common.sh@421 -- # return 0 00:23:02.315 04:06:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:02.315 04:06:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.315 04:06:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:02.315 04:06:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:02.315 04:06:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.315 04:06:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:02.315 04:06:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:02.315 04:06:37 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:02.315 04:06:37 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:23:02.315 04:06:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:02.315 04:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:02.315 04:06:37 -- common/autotest_common.sh@10 -- # set +x 00:23:02.315 ************************************ 00:23:02.315 START TEST nvmf_digest_clean 00:23:02.315 ************************************ 00:23:02.315 04:06:37 -- common/autotest_common.sh@1114 -- # run_digest 00:23:02.315 04:06:37 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:23:02.315 04:06:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:02.315 04:06:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.315 04:06:37 -- common/autotest_common.sh@10 -- # set +x 00:23:02.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.315 04:06:37 -- nvmf/common.sh@469 -- # nvmfpid=86712 00:23:02.315 04:06:37 -- nvmf/common.sh@470 -- # waitforlisten 86712 00:23:02.315 04:06:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:02.315 04:06:37 -- common/autotest_common.sh@829 -- # '[' -z 86712 ']' 00:23:02.315 04:06:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.315 04:06:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.315 04:06:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.315 04:06:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.315 04:06:37 -- common/autotest_common.sh@10 -- # set +x 00:23:02.315 [2024-11-08 04:06:37.379798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
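Annotation: the "Cannot find device" and "Cannot open network namespace" messages above are expected; nvmf_veth_init tears down any leftover topology first, and each delete is allowed to fail (the `# true` entries). Condensed from the trace, the rebuilt topology is one namespace holding the target ends of the veth pairs, a bridge joining the host-side peers, iptables rules admitting the NVMe/TCP port, and a three-way ping check before any NVMe traffic flows. The essential commands, stripped of trace markers (the various `ip link set ... up` steps are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                               # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # then 10.0.0.3, and 10.0.0.1 from inside the namespace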
00:23:02.315 [2024-11-08 04:06:37.380076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.573 [2024-11-08 04:06:37.522380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.573 [2024-11-08 04:06:37.634631] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:02.573 [2024-11-08 04:06:37.634817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.574 [2024-11-08 04:06:37.634836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.574 [2024-11-08 04:06:37.634849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.574 [2024-11-08 04:06:37.634891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.509 04:06:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.509 04:06:38 -- common/autotest_common.sh@862 -- # return 0 00:23:03.509 04:06:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:03.509 04:06:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.509 04:06:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.509 04:06:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.509 04:06:38 -- host/digest.sh@120 -- # common_target_config 00:23:03.509 04:06:38 -- host/digest.sh@43 -- # rpc_cmd 00:23:03.509 04:06:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.509 04:06:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.509 null0 00:23:03.509 [2024-11-08 04:06:38.547155] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.509 [2024-11-08 04:06:38.571269] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.509 04:06:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.509 04:06:38 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:23:03.509 04:06:38 -- host/digest.sh@77 -- # local rw bs qd 00:23:03.509 04:06:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:03.509 04:06:38 -- host/digest.sh@80 -- # rw=randread 00:23:03.509 04:06:38 -- host/digest.sh@80 -- # bs=4096 00:23:03.509 04:06:38 -- host/digest.sh@80 -- # qd=128 00:23:03.509 04:06:38 -- host/digest.sh@82 -- # bperfpid=86762 00:23:03.509 04:06:38 -- host/digest.sh@83 -- # waitforlisten 86762 /var/tmp/bperf.sock 00:23:03.509 04:06:38 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:03.509 04:06:38 -- common/autotest_common.sh@829 -- # '[' -z 86762 ']' 00:23:03.509 04:06:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.510 04:06:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.510 04:06:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:03.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
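Annotation: waitforlisten above parks the script until the freshly forked daemon owns its UNIX-domain RPC socket; the traced `local rpc_addr=/var/tmp/spdk.sock` and `local max_retries=100` are its knobs. A rough, illustrative equivalent of that polling; the real helper in autotest_common.sh also probes the RPC server rather than only checking for the socket node:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # daemon died before it ever listened
        [[ -S $rpc_addr ]] && return 0            # socket node exists: RPC server is up
        sleep 0.1
    done
    return 1                                      # timed out
}
waitforlisten 86762 /var/tmp/bperf.sock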
00:23:03.510 04:06:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.510 04:06:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.768 [2024-11-08 04:06:38.622886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:03.768 [2024-11-08 04:06:38.623115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86762 ] 00:23:03.768 [2024-11-08 04:06:38.757981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.768 [2024-11-08 04:06:38.854941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.704 04:06:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.704 04:06:39 -- common/autotest_common.sh@862 -- # return 0 00:23:04.704 04:06:39 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:04.704 04:06:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:04.704 04:06:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:04.964 04:06:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:04.964 04:06:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.222 nvme0n1 00:23:05.222 04:06:40 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:05.222 04:06:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.480 Running I/O for 2 seconds... 
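Annotation: the three RPCs just traced are the entire remote-control handshake for a bperf run: un-pause the bdevperf app that was started with --wait-for-rpc, attach an NVMe-oF controller with TCP data digest enabled (--ddgst is what makes every data PDU carry a crc32c), then fire the workload. Replayed by hand with the same paths as this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # prints the created bdev name, nvme0n1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests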
00:23:07.384
00:23:07.384 Latency(us)
00:23:07.384 [2024-11-08T04:06:42.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:07.384 [2024-11-08T04:06:42.495Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:07.384 nvme0n1 : 2.00 23720.80 92.66 0.00 0.00 5391.61 2502.28 15609.48
00:23:07.384 [2024-11-08T04:06:42.495Z] ===================================================================================================================
00:23:07.384 [2024-11-08T04:06:42.495Z] Total : 23720.80 92.66 0.00 0.00 5391.61 2502.28 15609.48
00:23:07.384 0
00:23:07.384 04:06:42 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:23:07.384 04:06:42 -- host/digest.sh@92 -- # get_accel_stats
00:23:07.384 04:06:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:07.384 04:06:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:07.384 04:06:42 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:07.384 | select(.opcode=="crc32c")
00:23:07.384 | "\(.module_name) \(.executed)"'
00:23:07.643 04:06:42 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:23:07.643 04:06:42 -- host/digest.sh@93 -- # exp_module=software
00:23:07.643 04:06:42 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:23:07.643 04:06:42 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:07.643 04:06:42 -- host/digest.sh@97 -- # killprocess 86762
00:23:07.643 04:06:42 -- common/autotest_common.sh@936 -- # '[' -z 86762 ']'
00:23:07.643 04:06:42 -- common/autotest_common.sh@940 -- # kill -0 86762
00:23:07.643 04:06:42 -- common/autotest_common.sh@941 -- # uname
00:23:07.643 04:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:07.643 04:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86762
killing process with pid 86762
Received shutdown signal, test time was about 2.000000 seconds
00:23:07.643 00
00:23:07.643 Latency(us)
00:23:07.643 [2024-11-08T04:06:42.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:07.643 [2024-11-08T04:06:42.754Z] ===================================================================================================================
00:23:07.643 [2024-11-08T04:06:42.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:07.643 04:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:07.643 04:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:07.643 04:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86762'
00:23:07.643 04:06:42 -- common/autotest_common.sh@955 -- # kill 86762
00:23:07.643 04:06:42 -- common/autotest_common.sh@960 -- # wait 86762
00:23:07.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
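Annotation: get_accel_stats above pipes accel_get_stats through that jq filter to learn which accel module actually computed the digests and how many times. Against a payload shaped like the sketch below (hand-written illustration inferred from the filter's field names, not captured from this run), the filter prints `software 23720`; the test then asserts the module matches the expected software fallback and that the counter moved:

# illustrative accel_get_stats payload:
#   {"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 23720}]}
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# -> software 23720    (read into acc_module / acc_executed, compared against exp_module)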
00:23:07.902 04:06:42 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:23:07.902 04:06:42 -- host/digest.sh@77 -- # local rw bs qd 00:23:07.902 04:06:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:07.902 04:06:42 -- host/digest.sh@80 -- # rw=randread 00:23:07.902 04:06:42 -- host/digest.sh@80 -- # bs=131072 00:23:07.902 04:06:42 -- host/digest.sh@80 -- # qd=16 00:23:07.902 04:06:42 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:07.902 04:06:42 -- host/digest.sh@82 -- # bperfpid=86851 00:23:07.902 04:06:42 -- host/digest.sh@83 -- # waitforlisten 86851 /var/tmp/bperf.sock 00:23:07.902 04:06:42 -- common/autotest_common.sh@829 -- # '[' -z 86851 ']' 00:23:07.902 04:06:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:07.902 04:06:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.902 04:06:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:07.902 04:06:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.902 04:06:42 -- common/autotest_common.sh@10 -- # set +x 00:23:07.902 [2024-11-08 04:06:42.981364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:07.902 [2024-11-08 04:06:42.981682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86851 ] 00:23:07.902 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:07.902 Zero copy mechanism will not be used. 00:23:08.161 [2024-11-08 04:06:43.112636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.161 [2024-11-08 04:06:43.196233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.140 04:06:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.140 04:06:43 -- common/autotest_common.sh@862 -- # return 0 00:23:09.140 04:06:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:09.140 04:06:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:09.140 04:06:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:09.140 04:06:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.140 04:06:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.707 nvme0n1 00:23:09.707 04:06:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:09.707 04:06:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:09.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:09.707 Zero copy mechanism will not be used. 00:23:09.707 Running I/O for 2 seconds... 
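Annotation: run_bperf maps its three arguments straight onto bdevperf flags, so this second pass swaps the 4096-byte/QD128 IOPS shape for a 131072-byte/QD16 bandwidth shape; the 128 KiB size is also why the sock layer reports exceeding the 65536-byte zero-copy threshold and falls back to copying. Annotated, the launch line reads:

bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
#        -m 2                 core mask 0x2: run on core 1, away from the target's core 0
#        -r .../bperf.sock    private RPC socket, distinct from the target's /var/tmp/spdk.sock
#        -w/-o/-q             workload type, I/O size in bytes, queue depth
#        -t 2                 seconds per run
#        -z --wait-for-rpc    start idle; the bdev arrives via RPC, the run starts via perform_tests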
00:23:11.609
00:23:11.609 Latency(us)
00:23:11.609 [2024-11-08T04:06:46.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:11.609 [2024-11-08T04:06:46.720Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:11.609 nvme0n1 : 2.00 9117.73 1139.72 0.00 0.00 1752.26 673.98 10545.34
00:23:11.609 [2024-11-08T04:06:46.720Z] ===================================================================================================================
00:23:11.609 [2024-11-08T04:06:46.720Z] Total : 9117.73 1139.72 0.00 0.00 1752.26 673.98 10545.34
00:23:11.609 0
00:23:11.868 04:06:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:23:11.868 04:06:46 -- host/digest.sh@92 -- # get_accel_stats
00:23:11.868 04:06:46 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:11.868 | select(.opcode=="crc32c")
00:23:11.868 | "\(.module_name) \(.executed)"'
00:23:11.868 04:06:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:11.868 04:06:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:12.127 04:06:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:23:12.127 04:06:46 -- host/digest.sh@93 -- # exp_module=software
00:23:12.127 04:06:46 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:23:12.127 04:06:47 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:12.127 04:06:47 -- host/digest.sh@97 -- # killprocess 86851
00:23:12.127 04:06:47 -- common/autotest_common.sh@936 -- # '[' -z 86851 ']'
00:23:12.127 04:06:47 -- common/autotest_common.sh@940 -- # kill -0 86851
00:23:12.127 04:06:47 -- common/autotest_common.sh@941 -- # uname
00:23:12.127 04:06:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:12.127 04:06:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86851
killing process with pid 86851
Received shutdown signal, test time was about 2.000000 seconds
00:23:12.127 00
00:23:12.127 Latency(us)
00:23:12.127 [2024-11-08T04:06:47.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.127 [2024-11-08T04:06:47.238Z] ===================================================================================================================
00:23:12.127 [2024-11-08T04:06:47.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:12.127 04:06:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:12.127 04:06:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:12.127 04:06:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86851'
00:23:12.127 04:06:47 -- common/autotest_common.sh@955 -- # kill 86851
00:23:12.127 04:06:47 -- common/autotest_common.sh@960 -- # wait 86851
00:23:12.387 04:06:47 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128
00:23:12.387 04:06:47 -- host/digest.sh@77 -- # local rw bs qd
00:23:12.387 04:06:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:23:12.387 04:06:47 -- host/digest.sh@80 -- # rw=randwrite
00:23:12.387 04:06:47 -- host/digest.sh@80 -- # bs=4096
00:23:12.387 04:06:47 -- host/digest.sh@80 -- # qd=128
00:23:12.387 04:06:47 -- host/digest.sh@82 -- # bperfpid=86937
00:23:12.387 04:06:47 -- host/digest.sh@83 -- # waitforlisten 86937 /var/tmp/bperf.sock
00:23:12.387 04:06:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:23:12.387 04:06:47 -- common/autotest_common.sh@829 -- # '[' -z 86937 ']'
00:23:12.387 04:06:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:12.387 04:06:47 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:12.387 04:06:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:12.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:12.387 04:06:47 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:12.387 04:06:47 -- common/autotest_common.sh@10 -- # set +x
00:23:12.387 [2024-11-08 04:06:47.312358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:12.387 [2024-11-08 04:06:47.312658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86937 ]
00:23:12.645 [2024-11-08 04:06:47.449374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:12.645 [2024-11-08 04:06:47.526470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:13.212 04:06:48 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:13.212 04:06:48 -- common/autotest_common.sh@862 -- # return 0
00:23:13.212 04:06:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:23:13.212 04:06:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:23:13.212 04:06:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:23:13.779 04:06:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:13.779 04:06:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:14.037 nvme0n1
00:23:14.038 04:06:48 -- host/digest.sh@91 -- # bperf_py perform_tests
00:23:14.038 04:06:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:14.038 Running I/O for 2 seconds...
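Annotation: the Latency(us) tables printed around here can be audited by hand, since the MiB/s column is just IOPS × I/O size and, by Little's law, average latency ≈ queue depth ÷ IOPS:

randread 4 KiB, QD 128: 23720.80 IOPS × 4096 B ÷ 2^20 = 92.66 MiB/s; 128 ÷ 23720.80 ≈ 5396 µs, in line with the 5391.61 µs average reported.
randread 128 KiB, QD 16: 9117.73 IOPS × 131072 B ÷ 2^20 = 9117.73 ÷ 8 = 1139.72 MiB/s; 16 ÷ 9117.73 ≈ 1755 µs vs the 1752.26 µs reported.

The same checks hold for the randwrite tables that follow.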
00:23:15.940
00:23:15.940 Latency(us)
00:23:15.940 [2024-11-08T04:06:51.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.940 [2024-11-08T04:06:51.051Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:15.940 nvme0n1 : 2.00 28738.39 112.26 0.00 0.00 4448.71 1846.92 14060.45
00:23:15.940 [2024-11-08T04:06:51.051Z] ===================================================================================================================
00:23:15.940 [2024-11-08T04:06:51.051Z] Total : 28738.39 112.26 0.00 0.00 4448.71 1846.92 14060.45
00:23:15.940 0
00:23:16.199 04:06:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:23:16.199 04:06:51 -- host/digest.sh@92 -- # get_accel_stats
00:23:16.199 04:06:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:16.199 04:06:51 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:16.199 | select(.opcode=="crc32c")
00:23:16.199 | "\(.module_name) \(.executed)"'
00:23:16.199 04:06:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:16.199 04:06:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:23:16.199 04:06:51 -- host/digest.sh@93 -- # exp_module=software
00:23:16.199 04:06:51 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:23:16.199 04:06:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:16.199 04:06:51 -- host/digest.sh@97 -- # killprocess 86937
00:23:16.199 04:06:51 -- common/autotest_common.sh@936 -- # '[' -z 86937 ']'
00:23:16.199 04:06:51 -- common/autotest_common.sh@940 -- # kill -0 86937
00:23:16.199 04:06:51 -- common/autotest_common.sh@941 -- # uname
00:23:16.199 04:06:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:16.199 04:06:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86937
00:23:16.458 04:06:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:16.458 04:06:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
killing process with pid 86937
04:06:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86937'
Received shutdown signal, test time was about 2.000000 seconds
00:23:16.458 00
00:23:16.458 Latency(us)
00:23:16.458 [2024-11-08T04:06:51.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.458 [2024-11-08T04:06:51.569Z] ===================================================================================================================
00:23:16.458 [2024-11-08T04:06:51.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:16.458 04:06:51 -- common/autotest_common.sh@955 -- # kill 86937
00:23:16.458 04:06:51 -- common/autotest_common.sh@960 -- # wait 86937
00:23:16.458 04:06:51 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16
00:23:16.458 04:06:51 -- host/digest.sh@77 -- # local rw bs qd
00:23:16.458 04:06:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:23:16.458 04:06:51 -- host/digest.sh@80 -- # rw=randwrite
00:23:16.458 04:06:51 -- host/digest.sh@80 -- # bs=131072
00:23:16.458 04:06:51 -- host/digest.sh@80 -- # qd=16
00:23:16.458 04:06:51 -- host/digest.sh@82 -- # bperfpid=87034
00:23:16.458 04:06:51 -- host/digest.sh@83 -- # waitforlisten 87034 /var/tmp/bperf.sock
00:23:16.458 04:06:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:23:16.458 04:06:51 -- common/autotest_common.sh@829 -- # '[' -z 87034 ']'
00:23:16.458 04:06:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:16.458 04:06:51 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:16.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:16.458 04:06:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:16.458 04:06:51 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:16.458 04:06:51 -- common/autotest_common.sh@10 -- # set +x
00:23:16.717 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:16.717 Zero copy mechanism will not be used.
00:23:16.717 [2024-11-08 04:06:51.599064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:16.717 [2024-11-08 04:06:51.599164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87034 ]
00:23:16.717 [2024-11-08 04:06:51.735162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:16.717 [2024-11-08 04:06:51.808280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:17.653 04:06:52 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:17.653 04:06:52 -- common/autotest_common.sh@862 -- # return 0
00:23:17.653 04:06:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:23:17.653 04:06:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:23:17.653 04:06:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:23:17.912 04:06:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:17.912 04:06:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:18.171 nvme0n1
00:23:18.171 04:06:53 -- host/digest.sh@91 -- # bperf_py perform_tests
00:23:18.171 04:06:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:18.171 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:18.171 Zero copy mechanism will not be used.
00:23:18.171 Running I/O for 2 seconds...
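Annotation: every teardown in this log runs the same killprocess helper traced repeatedly above: confirm the PID is set and alive, look up its command name, refuse to signal anything running as sudo, then TERM and reap it. In the spirit of those common/autotest_common.sh traces; this is a simplified sketch, not the verbatim helper:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                            # the '[' -z ... ']' guard from the trace
    kill -0 "$pid" || return 1                           # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
    fi
    [ "$process_name" = sudo ] && return 1               # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                          # reap the child, propagate its status
}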
00:23:20.704
00:23:20.704 Latency(us)
00:23:20.704 [2024-11-08T04:06:55.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.704 [2024-11-08T04:06:55.815Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:20.704 nvme0n1 : 2.00 7792.24 974.03 0.00 0.00 2049.26 1765.00 8936.73
00:23:20.704 [2024-11-08T04:06:55.815Z] ===================================================================================================================
00:23:20.704 [2024-11-08T04:06:55.815Z] Total : 7792.24 974.03 0.00 0.00 2049.26 1765.00 8936.73
00:23:20.704 0
00:23:20.704 04:06:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:23:20.704 04:06:55 -- host/digest.sh@92 -- # get_accel_stats
00:23:20.704 04:06:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:20.704 04:06:55 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:20.704 | select(.opcode=="crc32c")
00:23:20.704 | "\(.module_name) \(.executed)"'
00:23:20.704 04:06:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:20.704 04:06:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:23:20.704 04:06:55 -- host/digest.sh@93 -- # exp_module=software
00:23:20.704 04:06:55 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:23:20.704 04:06:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:20.704 04:06:55 -- host/digest.sh@97 -- # killprocess 87034
00:23:20.704 04:06:55 -- common/autotest_common.sh@936 -- # '[' -z 87034 ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@940 -- # kill -0 87034
00:23:20.704 04:06:55 -- common/autotest_common.sh@941 -- # uname
00:23:20.704 04:06:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87034
killing process with pid 87034
Received shutdown signal, test time was about 2.000000 seconds
00:23:20.704
00:23:20.704 Latency(us)
00:23:20.704 [2024-11-08T04:06:55.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.704 [2024-11-08T04:06:55.815Z] ===================================================================================================================
00:23:20.704 [2024-11-08T04:06:55.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:20.704 04:06:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:20.704 04:06:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87034'
00:23:20.704 04:06:55 -- common/autotest_common.sh@955 -- # kill 87034
00:23:20.704 04:06:55 -- common/autotest_common.sh@960 -- # wait 87034
00:23:20.704 04:06:55 -- host/digest.sh@126 -- # killprocess 86712
00:23:20.704 04:06:55 -- common/autotest_common.sh@936 -- # '[' -z 86712 ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@940 -- # kill -0 86712
00:23:20.704 04:06:55 -- common/autotest_common.sh@941 -- # uname
00:23:20.704 04:06:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86712
killing process with pid 86712
00:23:20.704 04:06:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:20.704 04:06:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:20.704 04:06:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86712'
04:06:55 -- common/autotest_common.sh@955 -- # kill 86712 00:23:20.704 04:06:55 -- common/autotest_common.sh@960 -- # wait 86712 00:23:20.962 ************************************ 00:23:20.962 00:23:20.962 real 0m18.736s 00:23:20.962 user 0m34.426s 00:23:20.962 sys 0m5.449s 00:23:20.962 04:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:20.962 04:06:56 -- common/autotest_common.sh@10 -- # set +x 00:23:20.962 END TEST nvmf_digest_clean 00:23:20.962 ************************************ 00:23:21.221 04:06:56 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:21.221 04:06:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:21.221 04:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.221 04:06:56 -- common/autotest_common.sh@10 -- # set +x 00:23:21.221 ************************************ 00:23:21.221 START TEST nvmf_digest_error 00:23:21.221 ************************************ 00:23:21.221 04:06:56 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:21.221 04:06:56 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:21.221 04:06:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:21.221 04:06:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.221 04:06:56 -- common/autotest_common.sh@10 -- # set +x 00:23:21.221 04:06:56 -- nvmf/common.sh@469 -- # nvmfpid=87146 00:23:21.221 04:06:56 -- nvmf/common.sh@470 -- # waitforlisten 87146 00:23:21.221 04:06:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:21.221 04:06:56 -- common/autotest_common.sh@829 -- # '[' -z 87146 ']' 00:23:21.221 04:06:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.221 04:06:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.221 04:06:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.221 04:06:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.221 04:06:56 -- common/autotest_common.sh@10 -- # set +x 00:23:21.221 [2024-11-08 04:06:56.158896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:21.221 [2024-11-08 04:06:56.158985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.221 [2024-11-08 04:06:56.293875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.480 [2024-11-08 04:06:56.374068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:21.480 [2024-11-08 04:06:56.374406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.480 [2024-11-08 04:06:56.374531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.480 [2024-11-08 04:06:56.374607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
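Annotation: for the error half of the suite the target is again started with --wait-for-rpc, and here the pause matters: crc32c has to be rerouted to the error-injection accel module (the accel_assign_opc call traced just below) before the framework initializes and anything computes a digest. Hedged as a sketch of what nvmfappstart arranges rather than a transcript of it, the exact ordering of the un-pause step may differ:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"                                                   # default socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error   # must land while the app is paused
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                  # resume boot with the override in place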
00:23:21.480 [2024-11-08 04:06:56.374724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.047 04:06:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.047 04:06:57 -- common/autotest_common.sh@862 -- # return 0 00:23:22.047 04:06:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:22.047 04:06:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.047 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:23:22.305 04:06:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.305 04:06:57 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:22.305 04:06:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.305 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:23:22.305 [2024-11-08 04:06:57.187242] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:22.305 04:06:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.305 04:06:57 -- host/digest.sh@104 -- # common_target_config 00:23:22.305 04:06:57 -- host/digest.sh@43 -- # rpc_cmd 00:23:22.305 04:06:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.305 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:23:22.305 null0 00:23:22.305 [2024-11-08 04:06:57.322392] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.305 [2024-11-08 04:06:57.346532] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.305 04:06:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.305 04:06:57 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:22.305 04:06:57 -- host/digest.sh@54 -- # local rw bs qd 00:23:22.305 04:06:57 -- host/digest.sh@56 -- # rw=randread 00:23:22.305 04:06:57 -- host/digest.sh@56 -- # bs=4096 00:23:22.305 04:06:57 -- host/digest.sh@56 -- # qd=128 00:23:22.305 04:06:57 -- host/digest.sh@58 -- # bperfpid=87186 00:23:22.305 04:06:57 -- host/digest.sh@60 -- # waitforlisten 87186 /var/tmp/bperf.sock 00:23:22.305 04:06:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:22.306 04:06:57 -- common/autotest_common.sh@829 -- # '[' -z 87186 ']' 00:23:22.306 04:06:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:22.306 04:06:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:22.306 04:06:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:22.306 04:06:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.306 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:23:22.306 [2024-11-08 04:06:57.400909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:22.306 [2024-11-08 04:06:57.400993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87186 ] 00:23:22.564 [2024-11-08 04:06:57.535767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.564 [2024-11-08 04:06:57.644870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.499 04:06:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.499 04:06:58 -- common/autotest_common.sh@862 -- # return 0 00:23:23.499 04:06:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:23.499 04:06:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:23.499 04:06:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:23.499 04:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.499 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:23:23.499 04:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.499 04:06:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:23.499 04:06:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:23.758 nvme0n1 00:23:23.758 04:06:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:23.758 04:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.758 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:23:23.758 04:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.758 04:06:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:23.758 04:06:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:24.016 Running I/O for 2 seconds... 
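Annotation: the error path is armed in two RPC moves, as traced above: injection is first set to disable so the controller attach itself completes with good digests, and only then switched to corrupt for the next 256 crc32c operations. From that point every receive-side digest check fails, which is exactly the flood of "data digest error" plus COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs that follows: the host's computed crc32c disagrees with the PDU, so the transport rather than the NVMe command is blamed, and with --bdev-retry-count -1 the bdev layer keeps retrying. Replayed with this run's paths:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # retry forever at the bdev layer
$rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable                   # attach with good digests first
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt the next 256 crc32c ops
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests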
00:23:24.016 [2024-11-08 04:06:58.937059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.016 [2024-11-08 04:06:58.937768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.016 [2024-11-08 04:06:58.937897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.016 [2024-11-08 04:06:58.947939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.016 [2024-11-08 04:06:58.948044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.016 [2024-11-08 04:06:58.948117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.016 [2024-11-08 04:06:58.959477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.016 [2024-11-08 04:06:58.959583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.016 [2024-11-08 04:06:58.959657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.016 [2024-11-08 04:06:58.971369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:58.971509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:58.971585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:58.983758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:58.983863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:58.983934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:58.996007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:58.996109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:58.996179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.009116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.009228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.009298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.020620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.020721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.020795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.030585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.030704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.030778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.041662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.041772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.041863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.052853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.052940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.053024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.064232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.064333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.064352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.073308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.073342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.073369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.017 [2024-11-08 04:06:59.083538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:24.017 [2024-11-08 04:06:59.083572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.017 [2024-11-08 04:06:59.083600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:24.017 [2024-11-08 04:06:59.092679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50)
00:23:24.017 [2024-11-08 04:06:59.092712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.017 [2024-11-08 04:06:59.092739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for well over a hundred further READ commands between 04:06:59.10 and 04:07:00.64 (elapsed 00:23:24.017 through 00:23:25.610): each entry reports a data digest error on tqpair=(0xef2f50) followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 with p:0 m:0 dnr:0; only the cid and lba values differ between entries ...]
[2024-11-08 04:07:00.639273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.649264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.649296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.649323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.659629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.659662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.659688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.669363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.669395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.669424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.680078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.680138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.689453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.689521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.689549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.700404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.700445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.700472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.610 [2024-11-08 04:07:00.710234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.610 [2024-11-08 04:07:00.710283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1086 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:25.610 [2024-11-08 04:07:00.710310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.718894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.718926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.718953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.728642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.728674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.728700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.738468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.738511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.738539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.747685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.747719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.747745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.758325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.758358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.758384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.768684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.768716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.768743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.779610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.779644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:75 nsid:1 lba:25547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.779670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.790579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.790612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.800076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.800109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.800136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.812332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.812365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.812376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.821852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.821914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.821941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.830331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.830364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.830391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.841342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.841375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.841401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.852879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.852914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.852940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.862407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.862450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.862478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.872848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.872881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.872907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.882488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.882520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.892549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.892582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.892609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.900859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.900892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.900918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.912243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 [2024-11-08 04:07:00.912276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.912303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 [2024-11-08 04:07:00.923442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef2f50) 00:23:25.872 
[2024-11-08 04:07:00.923475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.872 [2024-11-08 04:07:00.923501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.872 00:23:25.872 Latency(us) 00:23:25.872 [2024-11-08T04:07:00.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.872 [2024-11-08T04:07:00.983Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:25.872 nvme0n1 : 2.01 23776.78 92.88 0.00 0.00 5378.23 2234.18 16681.89 00:23:25.872 [2024-11-08T04:07:00.983Z] =================================================================================================================== 00:23:25.872 [2024-11-08T04:07:00.983Z] Total : 23776.78 92.88 0.00 0.00 5378.23 2234.18 16681.89 00:23:25.872 0 00:23:25.872 04:07:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:25.872 04:07:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:25.872 | .driver_specific 00:23:25.872 | .nvme_error 00:23:25.872 | .status_code 00:23:25.872 | .command_transient_transport_error' 00:23:25.872 04:07:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:25.872 04:07:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:26.131 04:07:01 -- host/digest.sh@71 -- # (( 187 > 0 )) 00:23:26.131 04:07:01 -- host/digest.sh@73 -- # killprocess 87186 00:23:26.131 04:07:01 -- common/autotest_common.sh@936 -- # '[' -z 87186 ']' 00:23:26.131 04:07:01 -- common/autotest_common.sh@940 -- # kill -0 87186 00:23:26.131 04:07:01 -- common/autotest_common.sh@941 -- # uname 00:23:26.131 04:07:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.131 04:07:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87186 00:23:26.131 04:07:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:26.131 04:07:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:26.131 killing process with pid 87186 00:23:26.131 04:07:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87186' 00:23:26.131 04:07:01 -- common/autotest_common.sh@955 -- # kill 87186 00:23:26.131 Received shutdown signal, test time was about 2.000000 seconds 00:23:26.131 00:23:26.131 Latency(us) 00:23:26.131 [2024-11-08T04:07:01.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.131 [2024-11-08T04:07:01.242Z] =================================================================================================================== 00:23:26.131 [2024-11-08T04:07:01.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.131 04:07:01 -- common/autotest_common.sh@960 -- # wait 87186 00:23:26.389 04:07:01 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:26.389 04:07:01 -- host/digest.sh@54 -- # local rw bs qd 00:23:26.389 04:07:01 -- host/digest.sh@56 -- # rw=randread 00:23:26.389 04:07:01 -- host/digest.sh@56 -- # bs=131072 00:23:26.389 04:07:01 -- host/digest.sh@56 -- # qd=16 00:23:26.389 04:07:01 -- host/digest.sh@58 -- # bperfpid=87278 00:23:26.389 04:07:01 -- host/digest.sh@60 -- # waitforlisten 87278 /var/tmp/bperf.sock 00:23:26.389 04:07:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:26.389 
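For reference, the pass/fail check traced just above reduces to a single pipeline: read the per-bdev error counters that --nvme-error-stat enables, and assert that the transient-transport-error count is non-zero. A minimal standalone sketch, reusing the rpc.py path, socket, and bdev name from this log (the errcount variable is ours, not part of digest.sh):

  # Pull the per-status-code NVMe error counters from bdevperf's iostat output.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The run passes only if at least one transient transport error was counted.
  (( errcount > 0 )) || echo 'FAIL: no transient transport errors recorded' >&2

Here the counter came back 187, one per failed READ completion in the run above.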
00:23:26.131 04:07:01 -- host/digest.sh@73 -- # killprocess 87186
00:23:26.131 04:07:01 -- common/autotest_common.sh@936 -- # '[' -z 87186 ']'
00:23:26.131 04:07:01 -- common/autotest_common.sh@940 -- # kill -0 87186
00:23:26.131 04:07:01 -- common/autotest_common.sh@941 -- # uname
00:23:26.131 04:07:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:26.131 04:07:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87186
00:23:26.131 04:07:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:26.131 04:07:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:26.131 killing process with pid 87186
00:23:26.131 04:07:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87186'
00:23:26.131 04:07:01 -- common/autotest_common.sh@955 -- # kill 87186
00:23:26.131 Received shutdown signal, test time was about 2.000000 seconds
00:23:26.131 
00:23:26.131 Latency(us)
00:23:26.131 [2024-11-08T04:07:01.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.131 [2024-11-08T04:07:01.242Z] ===================================================================================================================
00:23:26.131 [2024-11-08T04:07:01.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:26.131 04:07:01 -- common/autotest_common.sh@960 -- # wait 87186
00:23:26.389 04:07:01 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:23:26.389 04:07:01 -- host/digest.sh@54 -- # local rw bs qd
00:23:26.389 04:07:01 -- host/digest.sh@56 -- # rw=randread
00:23:26.389 04:07:01 -- host/digest.sh@56 -- # bs=131072
00:23:26.389 04:07:01 -- host/digest.sh@56 -- # qd=16
00:23:26.389 04:07:01 -- host/digest.sh@58 -- # bperfpid=87278
00:23:26.389 04:07:01 -- host/digest.sh@60 -- # waitforlisten 87278 /var/tmp/bperf.sock
00:23:26.389 04:07:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:26.389 04:07:01 -- common/autotest_common.sh@829 -- # '[' -z 87278 ']'
00:23:26.389 04:07:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:26.389 04:07:01 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:26.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:26.389 04:07:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:26.389 04:07:01 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:26.389 04:07:01 -- common/autotest_common.sh@10 -- # set +x
00:23:26.648 [2024-11-08 04:07:01.517335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:26.648 [2024-11-08 04:07:01.517468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87278 ]
00:23:26.648 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:26.648 Zero copy mechanism will not be used.
00:23:26.648 [2024-11-08 04:07:01.652659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:26.648 [2024-11-08 04:07:01.732853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:27.584 04:07:02 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:27.584 04:07:02 -- common/autotest_common.sh@862 -- # return 0
00:23:27.584 04:07:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:27.584 04:07:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:27.584 04:07:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:27.584 04:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.584 04:07:02 -- common/autotest_common.sh@10 -- # set +x
00:23:27.584 04:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.584 04:07:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:27.584 04:07:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:27.842 nvme0n1
00:23:27.842 04:07:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:27.842 04:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.842 04:07:02 -- common/autotest_common.sh@10 -- # set +x
00:23:28.101 04:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.101 04:07:02 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:28.101 04:07:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:28.101 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:28.101 Zero copy mechanism will not be used.
00:23:28.101 Running I/O for 2 seconds...
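Condensed from the trace above, the setup for this second injection pass is the following sequence. bperf_rpc and rpc_cmd here are our approximations of the autotest helpers (bperf_rpc targets the bdevperf socket shown in the log, rpc_cmd the default SPDK RPC socket), and the comments are our reading of the flags, not text from digest.sh:

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

  # Keep per-status-code NVMe error counters; a retry count of -1 retries
  # failed I/O indefinitely, so corrupted reads are counted but the job
  # still completes.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no stale injection is active, then attach over TCP with
  # data digest verification (--ddgst) enabled.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results in the accel layer (-i 32, presumably an injection
  # interval), so computed data digests stop matching and the affected READs
  # complete with COMMAND TRANSIENT TRANSPORT ERROR, as seen below.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the queued bdevperf job: randread, 131072-byte I/O, queue depth 16.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests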
00:23:28.101 [2024-11-08 04:07:03.076464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:28.101 [2024-11-08 04:07:03.076526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.101 [2024-11-08 04:07:03.076540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:28.101 [2024-11-08 04:07:03.080666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:28.101 [2024-11-08 04:07:03.080719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.101 [2024-11-08 04:07:03.080731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same triplet repeats for each corrupted 128 KiB READ of this run (04:07:03.084 through 04:07:03.350), now with len:32 blocks per command ...]
00:23:28.364 [2024-11-08 04:07:03.353859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:28.364 [2024-11-08 04:07:03.353922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.364 [2024-11-08 04:07:03.353948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:28.364 [2024-11-08 04:07:03.357342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.364 [2024-11-08 04:07:03.357374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.364 [2024-11-08 04:07:03.357401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.364 [2024-11-08 04:07:03.361001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.364 [2024-11-08 04:07:03.361034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.364 [2024-11-08 04:07:03.361060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.364 [2024-11-08 04:07:03.364635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.364 [2024-11-08 04:07:03.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.364 [2024-11-08 04:07:03.364695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.364 [2024-11-08 04:07:03.368250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.368282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.368309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.371307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.371340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.371367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.374742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.374792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.374804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.378126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.378157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.378184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.381936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.381968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.381996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.385364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.385411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.385448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.389023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.389056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.389083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.392444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.392475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.392501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.395945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.395977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.396005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.399728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.399760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.399787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.403394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.403439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.403466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.406902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.406934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.406961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.410496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.410543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.414198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.414232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.414259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.417275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.417307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.417333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.420946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.420977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.421004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.424502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.424534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.428530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.428562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.428589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.431709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.431742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.431768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.435005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.435038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.435064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.438461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.438507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.438518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.442399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.442469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.445743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.445794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.445821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.449865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.449929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.449955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.453354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.453402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 
[2024-11-08 04:07:03.453439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.457274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.365 [2024-11-08 04:07:03.457307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.365 [2024-11-08 04:07:03.457335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.365 [2024-11-08 04:07:03.460872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.366 [2024-11-08 04:07:03.460904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.366 [2024-11-08 04:07:03.460932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.366 [2024-11-08 04:07:03.464134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.366 [2024-11-08 04:07:03.464166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.366 [2024-11-08 04:07:03.464193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.366 [2024-11-08 04:07:03.468228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.366 [2024-11-08 04:07:03.468278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.366 [2024-11-08 04:07:03.468306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.626 [2024-11-08 04:07:03.472053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.626 [2024-11-08 04:07:03.472085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.626 [2024-11-08 04:07:03.472113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.626 [2024-11-08 04:07:03.475959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.626 [2024-11-08 04:07:03.476022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.626 [2024-11-08 04:07:03.476049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.626 [2024-11-08 04:07:03.479995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.626 [2024-11-08 04:07:03.480027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT 
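
(Editor's note: this burst is the host-side data digest (DDGST) check failing on every C2H data PDU for this TCP qpair: nvme_tcp_accel_seq_recv_compute_crc32_done() recomputes a CRC32C over the received payload and compares it with the digest the target appended, and each mismatch is logged as "data digest error" before the READ is failed. A minimal, self-contained sketch of that check follows, assuming the standard NVMe/TCP digest parameters, CRC32C with reflected polynomial 0x82F63B78, initial value 0xFFFFFFFF, and final XOR 0xFFFFFFFF; function and parameter names are illustrative, not SPDK's.)

/* Sketch of the NVMe/TCP data digest (DDGST) check. Bitwise CRC32C for
 * clarity; SPDK uses table-driven or hardware-accelerated variants. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                       /* initial value */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                         /* final complement */
}

/* false here is what the log above prints as "data digest error";
 * ddgst_from_wire stands for the 32-bit digest trailing the data PDU. */
static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst_from_wire)
{
    return crc32c(payload, len) == ddgst_from_wire;
}

int main(void)
{
    /* Well-known CRC32C check value: "123456789" -> 0xE3069283 */
    const uint8_t vec[] = "123456789";
    return ddgst_ok(vec, 9, 0xE3069283u) ? 0 : 1;
}
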
[~140 further data digest error events on tqpair (0xbb27e0) elided, 04:07:03.291 through 04:07:03.767: each repeats the same nvme_tcp.c:1391 *ERROR* line followed by an nvme_qpair.c READ command print and a TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; only cid, lba, and sqhd vary.]
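
(Editor's note: each failed READ is then completed to the upper layer with the status printed as "(00/22)": status code type 00h, generic, and status code 22h, Transient Transport Error, with dnr:0 so the command remains retryable. The sketch below unpacks that 16-bit status halfword, completion-queue-entry dword 3 bits 31:16 with the phase tag in its low bit, following the NVMe base specification layout; struct and helper names are illustrative.)

/* Decoder for the "(SCT/SC) ... p: m: dnr:" part of the completion lines.
 * CRD (bits 13:12) is omitted for brevity. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    unsigned p;    /* phase tag */
    unsigned sc;   /* status code */
    unsigned sct;  /* status code type */
    unsigned m;    /* more */
    unsigned dnr;  /* do not retry */
};

static struct nvme_status decode_status(uint16_t sf)
{
    struct nvme_status s = {
        .p   = sf & 0x1,
        .sc  = (sf >> 1) & 0xFF,
        .sct = (sf >> 9) & 0x7,
        .m   = (sf >> 14) & 0x1,
        .dnr = (sf >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* 0x0044 encodes p:0 sc:0x22 sct:0x0 m:0 dnr:0 -> prints "(00/22)" */
    struct nvme_status s = decode_status(0x0044);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
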
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.767306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.771043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.771091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.771118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.774903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.774950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.774978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.778815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.778863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.778891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.782900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.786891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.786939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.789527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.789570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.789582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.793676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.793728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.793742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.796944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.796993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.797020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.800496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.800543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.800570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.803999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.804047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.804074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.889 [2024-11-08 04:07:03.807786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.889 [2024-11-08 04:07:03.807834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.889 [2024-11-08 04:07:03.807862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.811564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.811612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.811640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.815800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.815876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.819542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.819590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.819617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.822914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.822962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.822989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.826250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.826299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.826326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.829984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.830032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.830059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.833465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.833515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.833527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.837397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.837455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.837467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.841366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.841442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.841456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.844358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 
[2024-11-08 04:07:03.844390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.844416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.848111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.848144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.848171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.851816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.851849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.851876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.855665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.855713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.855741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.859071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.859120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.859147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.862826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.862876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.862904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.866873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:28.890 [2024-11-08 04:07:03.866905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.890 [2024-11-08 04:07:03.866932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.890 [2024-11-08 04:07:03.870318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
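Every entry in the run above is the same event, and it is the one this test stage is designed to provoke: with NVMe/TCP data digests enabled, each data-bearing PDU carries a trailing DDGST field, a CRC32C over the PDU payload, and nvme_tcp_accel_seq_recv_compute_crc32_done fires when the CRC recomputed on receive does not match the one carried on the wire. The affected READ is then completed with a transient transport error rather than returning silently corrupted data. Below is a minimal sketch of that check in plain C. It is not SPDK source: crc32c and verify_ddgst are invented names for illustration, but the polynomial (Castagnoli, reflected 0x82F63B38), the 0xFFFFFFFF init, and the final XOR are the standard CRC32C definition that NVMe/TCP uses for digests.

/* Minimal sketch of the check failing above; plain C, not SPDK source. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, table-free CRC32C: init 0xFFFFFFFF, reflected, final XOR. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B38u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* 0 if the digest received with the PDU matches the payload, -1 if not. */
static int verify_ddgst(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    /* Standard CRC32C check value: crc32c("123456789") == 0xE3069283. */
    const uint8_t payload[] = "123456789";
    uint32_t ddgst = crc32c(payload, 9);
    printf("ddgst = 0x%08x\n", (unsigned)ddgst);

    /* One bit flipped in flight is enough to trip the check. */
    uint8_t corrupt[9];
    memcpy(corrupt, payload, 9);
    corrupt[4] ^= 0x01;
    printf("intact payload:  %s\n", verify_ddgst(payload, 9, ddgst) ? "data digest error" : "ok");
    printf("corrupt payload: %s\n", verify_ddgst(corrupt, 9, ddgst) ? "data digest error" : "ok");
    return 0;
}

The function name in the log suggests SPDK offloads this CRC computation to its accel framework on receive and checks the result in a completion callback; the arithmetic being verified is the same either way.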
00:23:28.890 [2024-11-08 04:07:03.870318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:28.890 [2024-11-08 04:07:03.870350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.890 [2024-11-08 04:07:03.870376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... identical digest-error sequences continue for qid:1 READs from 04:07:03.873 through 04:07:04.094 ...]
00:23:29.153 [2024-11-08 04:07:04.097991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.098024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.098051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
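The completion lines also show why these failures are survivable: spdk_nvme_print_completion renders the status as (SCT/SC) followed by the phase (p), more (m), and do-not-retry (dnr) bits, and every entry here is (00/22) with dnr:0, i.e. status code type 0x0 (generic) and status code 0x22 (Command Transient Transport Error) with the Do Not Retry bit clear, so the initiator may resubmit the command. A small decoder sketch follows, again plain C rather than SPDK source, using the NVMe completion queue entry layout (DW3: bit 16 phase tag, bits 17-24 SC, bits 25-27 SCT, bit 30 More, bit 31 DNR):

/* Sketch of what "(00/22) ... p:0 m:0 dnr:0" encodes; not SPDK source. */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    unsigned p, sc, sct, m, dnr;
};

static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
    struct cqe_status s;
    s.p   = (dw3 >> 16) & 0x1u;   /* phase tag */
    s.sc  = (dw3 >> 17) & 0xFFu;  /* status code */
    s.sct = (dw3 >> 25) & 0x7u;   /* status code type */
    s.m   = (dw3 >> 30) & 0x1u;   /* more status available via log page */
    s.dnr = (dw3 >> 31) & 0x1u;   /* do not retry */
    return s;
}

int main(void)
{
    /* Rebuild the DW3 the log above implies: SCT 0x0, SC 0x22, other bits clear. */
    uint32_t dw3 = 0x22u << 17;
    struct cqe_status s = decode_cqe_dw3(dw3);
    printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
           s.sct, s.sc, s.p, s.m, s.dnr,
           (s.sct == 0x0 && s.sc == 0x22 && !s.dnr)
               ? "transient transport error, retry permitted"
               : "other status");
    return 0;
}

The sqhd value printed alongside is the submission queue head pointer echoed back in the completion; the rotating 0001/0021/0041/0061 values above are simply that head advancing as commands drain from the queue.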
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.101150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.104742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.104774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.104801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.108199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.108231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.108258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.112473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.112505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.112533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.116348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.116381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.116407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.120334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.120366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.120393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.124115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.153 [2024-11-08 04:07:04.124147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.153 [2024-11-08 04:07:04.124174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.153 [2024-11-08 04:07:04.126829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 
00:23:29.153 [2024-11-08 04:07:04.126861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.126888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.130353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.130386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.130413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.133787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.133850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.133877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.137260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.137292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.137319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.141298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.141330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.141358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.144281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.144314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.144340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.147957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.147988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.148014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.151996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.152027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.152054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.156134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.156167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.153 [2024-11-08 04:07:04.156194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.153 [2024-11-08 04:07:04.159708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.153 [2024-11-08 04:07:04.159741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.159768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.163390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.163434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.163462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.167004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.167053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.167081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.170392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.170450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.170461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.174231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.174264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.174291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.177442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.177473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.181342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.181375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.181402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.184593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.184625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.184652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.188053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.188086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.188113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.191734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.191766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.191793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.195442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.195472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.195499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.199361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.199394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.199421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.202959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.202991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.203018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.206887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.206919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.206946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.210604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.210653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.210665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.213798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.213862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.213888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.217211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.217274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.217300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.221072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.221131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.224187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.224218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.224245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.227770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.227802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.227828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.231792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.231825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.231851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.235074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.235106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.235132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.238812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.238844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.238871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.242759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.242807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.242818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.246808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.246854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.246864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.250672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.250721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.250733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.254069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.254101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.154 [2024-11-08 04:07:04.254127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.154 [2024-11-08 04:07:04.258299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.154 [2024-11-08 04:07:04.258363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.155 [2024-11-08 04:07:04.258390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.261937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.261969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.261995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.265846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.265909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.265921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.270146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.270178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.270205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.273447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.273485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.273513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.276709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.276757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.276784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.280783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.280815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.280842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.284135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.284169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.284196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.288242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.288275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.288301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.291942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.291973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.291999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.296028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.296058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.296084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.300245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.300278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.300305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.303795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.303828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.307739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.307771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.307798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.310888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.310920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.310947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.314379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.314411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.314449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.318071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.318104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.318131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.321790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.321870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.321897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.324897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.415 [2024-11-08 04:07:04.324929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.415 [2024-11-08 04:07:04.324956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.415 [2024-11-08 04:07:04.328702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.328751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.328778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.332948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.333032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.333046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.337373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.337448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.337517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.341256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.341306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.341334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.345424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.345469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.345503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.348944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.348976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.349002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.352365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.352397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.352424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.356361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.356393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.356421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.359835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.359869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.359895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.363322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.363355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.363381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.367076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.367108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.367135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.370957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.370990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.371017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.374088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.374119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.374146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.378047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.378080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.378107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.381195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.381229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.381256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.384475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.384507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.384533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.388853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.388884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.388911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.391856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.391888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.395621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.395670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.399695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.399743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.399754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.403268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.403299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.403326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.406782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.406814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.406841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.410357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.410389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.410416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.413868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.413929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.418069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.418099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.418125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.421748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.416 [2024-11-08 04:07:04.421799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.416 [2024-11-08 04:07:04.421812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.416 [2024-11-08 04:07:04.425838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.425886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.429742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.429791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.429803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.433568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.433603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.433614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.437203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.437236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.437264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.440698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.440730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.440757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.444081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.444114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.444141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.447561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.447611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.447622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.451343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.451376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.451404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.455043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.455075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.455102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.458338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.458371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.458397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.461313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.461345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.461372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.464685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.464717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.464744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.468253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.468285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.468295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.471634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.471684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.471695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.475240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.475274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.475302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.479341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.479374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.479401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.483203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.483237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.483264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.486391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.486433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.486461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.490035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.490068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.490095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.493624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.493674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.493686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.497559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.497593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.497605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.500849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.500881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.500908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.504610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.504657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.504685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.508440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.508472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.508499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.511859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.511891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.511918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.515228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.417 [2024-11-08 04:07:04.515260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.417 [2024-11-08 04:07:04.515287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.417 [2024-11-08 04:07:04.519106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.418 [2024-11-08 04:07:04.519138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.418 [2024-11-08 04:07:04.519165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.522490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.522547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.522575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.525626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.525663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.525675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.529666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.529701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.529713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.533438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.533539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.536901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.536932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.536958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.540499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.540531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.540558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.544052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.544083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.544110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.548264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.548297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.548323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.551793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.551827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.678 [2024-11-08 04:07:04.551853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.678 [2024-11-08 04:07:04.555555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.678 [2024-11-08 04:07:04.555604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.555616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.559213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.559247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.559273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.562734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.562766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.562793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.566058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.566117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.569905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.569938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.569964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.573717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.573768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.573780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.576875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.576908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.576934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.580149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.580182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.580208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.583846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.583877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.583904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.587768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.587830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.587856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.591834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.591864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.591891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.595864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.595896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.595922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.600096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.600128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.600155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.603635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.603684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.603696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.607264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.607296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.607323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.610662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.610694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.610720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.614375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.614407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.614445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.617411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.617451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.617462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.621193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.621226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.621252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.625019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.625052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.625078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.628362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.628395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.679 [2024-11-08 04:07:04.628421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:29.679 [2024-11-08 04:07:04.631757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.679 [2024-11-08 04:07:04.631806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.631817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.679 [2024-11-08 04:07:04.635384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.679 [2024-11-08 04:07:04.635441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.635454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.679 [2024-11-08 04:07:04.638224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.679 [2024-11-08 04:07:04.638255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.638282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.679 [2024-11-08 04:07:04.642069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.679 [2024-11-08 04:07:04.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.642127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.679 [2024-11-08 04:07:04.646160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.679 [2024-11-08 04:07:04.646193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.646220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.679 [2024-11-08 04:07:04.649883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.679 [2024-11-08 04:07:04.649915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.679 [2024-11-08 04:07:04.649926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.652958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.652991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.656320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.656353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.656380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.660715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.660765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.660792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.664899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.664949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.664976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.668775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.668839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.668850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.672074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.672106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.672132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.676570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.676622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.676636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.680537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.680588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.680615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.684020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 
[2024-11-08 04:07:04.684052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.684079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.688157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.688187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.688214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.692299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.692331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.692358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.696049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.696081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.696108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.699533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.699566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.699593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.703266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.703299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.706748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.706780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.706807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.710537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.710569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.710596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.714636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.714669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.714696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.717979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.718011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.718038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.721238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.721270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.724635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.724668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.724694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.728236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.728268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.728295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.732070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.732102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.732129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.736199] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.736229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.736256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.739679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.739711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.739737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.743233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.743281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.680 [2024-11-08 04:07:04.747122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.680 [2024-11-08 04:07:04.747170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.680 [2024-11-08 04:07:04.747197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.751264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.751337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.754683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.754732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.754759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.758737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.758786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.758814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:29.681 [2024-11-08 04:07:04.762922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.762971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.762998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.766586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.766634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.766662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.770248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.770297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.770324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.773560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.773611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.773624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.777298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.777331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.780139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.780171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.780197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.681 [2024-11-08 04:07:04.784227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.681 [2024-11-08 04:07:04.784259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.681 [2024-11-08 04:07:04.784286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.788732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.788780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.788809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.792448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.792507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.792534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.796274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.796306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.796333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.800366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.800399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.800426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.804324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.804356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.804383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.807243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.807275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.810536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.810585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.810596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.814394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.814454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.814467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.818286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.818334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.818361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.821856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.821904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.821947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.825291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.825323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.825349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.828900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.828932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.828959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.832800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.832832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.832859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.835867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.835900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.941 [2024-11-08 04:07:04.835926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.941 [2024-11-08 04:07:04.839711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.941 [2024-11-08 04:07:04.839760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.839788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.843220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.843251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.843278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.847483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.847544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.847573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.851267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.851317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.851344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.855412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.855490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.855519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.859161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.859210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.859237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.863544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.863593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 
[2024-11-08 04:07:04.863620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.867908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.867957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.867984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.872216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.872264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.872291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.876219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.876269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.876297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.879796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.879845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.879872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.883792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.883840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.883867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.887134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.887183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.890997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.891046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.891073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.894887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.894936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.894962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.898736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.898785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.898812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.902488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.902536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.902562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.906046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.906094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.906121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.909326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.909373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.909400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.912459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.912506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.912533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.916671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.916720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.916747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.920239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.920288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.920316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.923780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.923828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.923856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.927139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.927187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.927215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.931060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.931108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.931136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.934573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.934620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.942 [2024-11-08 04:07:04.934648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.942 [2024-11-08 04:07:04.938153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.942 [2024-11-08 04:07:04.938202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.938230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.942485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.942531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.942558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.946160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.946209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.946237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.949392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.949467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.949502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.952876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.952925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.952953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.956433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.956478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.956505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.960079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.960127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.963846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.963894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.963922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.967458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 
[2024-11-08 04:07:04.967505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.970946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.970993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.971020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.974790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.974880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.978698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.978746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.978773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.982228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.982276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.982303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.986245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.986296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.986324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.990238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.990285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.990313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.994543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.994592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.994619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:04.998410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:04.998468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:04.998495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.002059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.002106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.002133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.004819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.004867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.004894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.009089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.009137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.009165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.013012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.013061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.013089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.017104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.017152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.017179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.020615] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.020691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.024302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.024351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.024378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.028247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.028295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.028322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.032184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.032233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.032261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.943 [2024-11-08 04:07:05.035972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.943 [2024-11-08 04:07:05.036019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.943 [2024-11-08 04:07:05.036046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.944 [2024-11-08 04:07:05.039587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.944 [2024-11-08 04:07:05.039638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.944 [2024-11-08 04:07:05.039649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.944 [2024-11-08 04:07:05.043509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0) 00:23:29.944 [2024-11-08 04:07:05.043575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.944 [2024-11-08 04:07:05.043587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:29.944 [2024-11-08 04:07:05.047116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:29.944 [2024-11-08 04:07:05.047164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.944 [2024-11-08 04:07:05.047191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:30.202 [2024-11-08 04:07:05.050379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:30.202 [2024-11-08 04:07:05.050450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.202 [2024-11-08 04:07:05.050464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:30.202 [2024-11-08 04:07:05.054277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:30.202 [2024-11-08 04:07:05.054327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.202 [2024-11-08 04:07:05.054371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:30.202 [2024-11-08 04:07:05.058164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:30.202 [2024-11-08 04:07:05.058212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.202 [2024-11-08 04:07:05.058240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.202 [2024-11-08 04:07:05.061542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbb27e0)
00:23:30.202 [2024-11-08 04:07:05.061578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.202 [2024-11-08 04:07:05.061590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:30.202
00:23:30.202 Latency(us)
00:23:30.202 [2024-11-08T04:07:05.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.202 [2024-11-08T04:07:05.313Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:30.202 nvme0n1 : 2.00 8328.98 1041.12 0.00 0.00 1918.08 491.52 11617.75
00:23:30.202 [2024-11-08T04:07:05.313Z] ===================================================================================================================
00:23:30.202 [2024-11-08T04:07:05.313Z] Total : 8328.98 1041.12 0.00 0.00 1918.08 491.52 11617.75
00:23:30.202 0
00:23:30.202 04:07:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:30.202 04:07:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:30.202 04:07:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:30.202 04:07:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:30.202 | .driver_specific
00:23:30.202 | .nvme_error
00:23:30.202 | .status_code
00:23:30.202 | .command_transient_transport_error'
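The summary table above is internally consistent: 8328.98 IOPS at an IO size of 131072 bytes (128 KiB) is 8328.98/8 = 1041.12 MiB/s. The get_transient_errcount helper traced at host/digest.sh@27-28 reduces to one RPC plus a jq filter; the following is a minimal restatement of what the xtrace shows (same socket path and bdev name as in this run; the function body is paraphrased from the trace rather than copied from digest.sh):

    # sketch of the traced helper: read the per-NVMe error counters kept by the
    # bdev layer and pull out the transient transport error count
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

These counters are only populated because bdev_nvme_set_options is called with --nvme-error-stat when each bperf instance is configured (visible below for the randwrite pass); the assertion that follows checks that the counter moved, 537 transient transport errors in this randread pass.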
00:23:30.461 04:07:05 -- host/digest.sh@71 -- # (( 537 > 0 ))
00:23:30.461 04:07:05 -- host/digest.sh@73 -- # killprocess 87278
00:23:30.461 04:07:05 -- common/autotest_common.sh@936 -- # '[' -z 87278 ']'
00:23:30.461 04:07:05 -- common/autotest_common.sh@940 -- # kill -0 87278
00:23:30.461 04:07:05 -- common/autotest_common.sh@941 -- # uname
00:23:30.461 04:07:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:30.461 04:07:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87278
00:23:30.461 04:07:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:30.461 04:07:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
killing process with pid 87278
00:23:30.461 04:07:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87278'
Received shutdown signal, test time was about 2.000000 seconds
00:23:30.461
00:23:30.461 Latency(us)
00:23:30.461 [2024-11-08T04:07:05.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.461 [2024-11-08T04:07:05.572Z] ===================================================================================================================
00:23:30.461 [2024-11-08T04:07:05.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:30.461 04:07:05 -- common/autotest_common.sh@955 -- # kill 87278
00:23:30.461 04:07:05 -- common/autotest_common.sh@960 -- # wait 87278
00:23:30.720 04:07:05 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:23:30.720 04:07:05 -- host/digest.sh@54 -- # local rw bs qd
00:23:30.720 04:07:05 -- host/digest.sh@56 -- # rw=randwrite
00:23:30.720 04:07:05 -- host/digest.sh@56 -- # bs=4096
00:23:30.720 04:07:05 -- host/digest.sh@56 -- # qd=128
00:23:30.720 04:07:05 -- host/digest.sh@58 -- # bperfpid=87364
00:23:30.720 04:07:05 -- host/digest.sh@60 -- # waitforlisten 87364 /var/tmp/bperf.sock
00:23:30.720 04:07:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:30.720 04:07:05 -- common/autotest_common.sh@829 -- # '[' -z 87364 ']'
00:23:30.720 04:07:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:30.720 04:07:05 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:30.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:30.720 04:07:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:30.720 04:07:05 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:30.720 04:07:05 -- common/autotest_common.sh@10 -- # set +x
00:23:30.720 [2024-11-08 04:07:05.677743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:30.720 [2024-11-08 04:07:05.677877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87364 ]
00:23:30.720 [2024-11-08 04:07:05.813032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:30.978 [2024-11-08 04:07:05.892441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:31.546 04:07:06 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:31.546 04:07:06 -- common/autotest_common.sh@862 -- # return 0
00:23:31.546 04:07:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:31.546 04:07:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:31.806 04:07:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:31.806 04:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:31.806 04:07:06 -- common/autotest_common.sh@10 -- # set +x
00:23:31.806 04:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:31.806 04:07:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:31.806 04:07:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:32.064 nvme0n1
00:23:32.324 04:07:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:32.324 04:07:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.324 04:07:07 -- common/autotest_common.sh@10 -- # set +x
00:23:32.324 04:07:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.324 04:07:07 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:32.324 04:07:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:32.324 Running I/O for 2 seconds...
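Stripped of the xtrace noise, the randwrite pass sets up as follows. This is a condensed sketch, not the digest.sh source: the two rpc_cmd calls are assumed here to reach the nvmf target app on SPDK's default RPC socket (the trace does not print their socket path), while the bperf_rpc calls explicitly target the bdevperf instance on /var/tmp/bperf.sock.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdevperf side: keep per-controller NVMe error counters and retry failed I/O forever
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (socket assumed): make sure no crc32c injection is armed yet
    $rpc accel_error_inject_error -o crc32c -t disable
    # attach with data digest enabled so every TCP data PDU carries a crc32c digest
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (socket assumed): corrupt the next 256 crc32c computations
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # run the 2-second randwrite workload configured on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

From this point on, each injected digest mismatch completes the affected WRITE with the same transient transport error status seen in the randread pass, which is why the burst below mirrors the one above.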
00:23:32.324 [2024-11-08 04:07:07.280352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6890 00:23:32.324 [2024-11-08 04:07:07.280701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.280740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.289585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e95a0 00:23:32.324 [2024-11-08 04:07:07.290279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.290327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.298670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ee190 00:23:32.324 [2024-11-08 04:07:07.299454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.299512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.307824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ed0b0 00:23:32.324 [2024-11-08 04:07:07.309146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.309177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.316787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:32.324 [2024-11-08 04:07:07.317341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.317374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.325857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e23b8 00:23:32.324 [2024-11-08 04:07:07.327064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.327095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.335527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6020 00:23:32.324 [2024-11-08 04:07:07.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.335908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.344533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0bc0 00:23:32.324 [2024-11-08 04:07:07.345025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.345060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.353508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc998 00:23:32.324 [2024-11-08 04:07:07.354028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.354062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.362457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e01f8 00:23:32.324 [2024-11-08 04:07:07.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.363042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.370914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190de8a8 00:23:32.324 [2024-11-08 04:07:07.371207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.382268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0630 00:23:32.324 [2024-11-08 04:07:07.382971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.383017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.389619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fcdd0 00:23:32.324 [2024-11-08 04:07:07.390782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.390828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.398819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6300 00:23:32.324 [2024-11-08 04:07:07.399136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.399156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.407600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4f40 00:23:32.324 [2024-11-08 04:07:07.408443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.408478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.416432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4f40 00:23:32.324 [2024-11-08 04:07:07.416663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.416696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:32.324 [2024-11-08 04:07:07.427331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3498 00:23:32.324 [2024-11-08 04:07:07.428744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.324 [2024-11-08 04:07:07.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.436949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ebb98 00:23:32.583 [2024-11-08 04:07:07.437628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.437678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.444829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0bc0 00:23:32.583 [2024-11-08 04:07:07.445945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.446005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.454994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eff18 00:23:32.583 [2024-11-08 04:07:07.455672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.455717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.464220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6738 00:23:32.583 [2024-11-08 04:07:07.465291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.465336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.473045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fdeb0 00:23:32.583 [2024-11-08 04:07:07.474651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.474701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.482626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:32.583 [2024-11-08 04:07:07.483307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.483353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.490375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fa3a0 00:23:32.583 [2024-11-08 04:07:07.491411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.491465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.501222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e4578 00:23:32.583 [2024-11-08 04:07:07.502283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.583 [2024-11-08 04:07:07.502328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.583 [2024-11-08 04:07:07.507950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e7c50 00:23:32.584 [2024-11-08 04:07:07.508257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.508289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.517940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f9b30 00:23:32.584 [2024-11-08 04:07:07.518663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.518709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.526936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f5be8 00:23:32.584 [2024-11-08 04:07:07.528302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.528347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.535740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e4de8 00:23:32.584 [2024-11-08 04:07:07.537092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.537137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.544961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fe2e8 00:23:32.584 [2024-11-08 04:07:07.546492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.546548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.553084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6890 00:23:32.584 [2024-11-08 04:07:07.554176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.554223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.563773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fda78 00:23:32.584 [2024-11-08 04:07:07.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.564739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.571737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc998 00:23:32.584 [2024-11-08 04:07:07.573254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.573300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.580460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e95a0 00:23:32.584 [2024-11-08 04:07:07.581753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.581786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.589651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ed920 00:23:32.584 [2024-11-08 04:07:07.590177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.590211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.598608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0ea0 00:23:32.584 [2024-11-08 04:07:07.599329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.599375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.607463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e99d8 00:23:32.584 [2024-11-08 04:07:07.608160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.608206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.616303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ebfd0 00:23:32.584 [2024-11-08 04:07:07.616989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.617037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.625161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190feb58 00:23:32.584 [2024-11-08 04:07:07.625799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.625875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.634061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f1868 00:23:32.584 [2024-11-08 04:07:07.634733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.634779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.643105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fa3a0 00:23:32.584 [2024-11-08 04:07:07.643728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.643758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.652074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190feb58 00:23:32.584 [2024-11-08 04:07:07.652728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 
04:07:07.652789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.659941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f5be8 00:23:32.584 [2024-11-08 04:07:07.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.660184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.669863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fe720 00:23:32.584 [2024-11-08 04:07:07.670423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.678863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eaab8 00:23:32.584 [2024-11-08 04:07:07.680073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.680103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.584 [2024-11-08 04:07:07.687935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f9b30 00:23:32.584 [2024-11-08 04:07:07.689280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.584 [2024-11-08 04:07:07.689328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.697008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ff3c8 00:23:32.843 [2024-11-08 04:07:07.697539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.697593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.704686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:32.843 [2024-11-08 04:07:07.704922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.704944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.715627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ec408 00:23:32.843 [2024-11-08 04:07:07.716258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.843 [2024-11-08 04:07:07.716319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.724435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fe2e8 00:23:32.843 [2024-11-08 04:07:07.725047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.725110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.733113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f96f8 00:23:32.843 [2024-11-08 04:07:07.733880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.733940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.740653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e23b8 00:23:32.843 [2024-11-08 04:07:07.740795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.740829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.749966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f2948 00:23:32.843 [2024-11-08 04:07:07.750211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.750285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:32.843 [2024-11-08 04:07:07.759048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e99d8 00:23:32.843 [2024-11-08 04:07:07.759274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.843 [2024-11-08 04:07:07.759313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.769524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f9b30 00:23:32.844 [2024-11-08 04:07:07.770885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.770914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.778744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ecc78 00:23:32.844 [2024-11-08 04:07:07.780017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18826 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.780047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.787695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6738 00:23:32.844 [2024-11-08 04:07:07.789200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.789231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.795444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e5a90 00:23:32.844 [2024-11-08 04:07:07.796436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.805492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb8b8 00:23:32.844 [2024-11-08 04:07:07.806078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.806109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.814460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fe720 00:23:32.844 [2024-11-08 04:07:07.815200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.815246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.823287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f57b0 00:23:32.844 [2024-11-08 04:07:07.824018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.824063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.832147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190edd58 00:23:32.844 [2024-11-08 04:07:07.832884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.832930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.840981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ee5c8 00:23:32.844 [2024-11-08 04:07:07.841716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.841764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.849882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6738 00:23:32.844 [2024-11-08 04:07:07.850515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.850587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.858853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3060 00:23:32.844 [2024-11-08 04:07:07.859603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.859648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.867125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e23b8 00:23:32.844 [2024-11-08 04:07:07.868657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.868703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.876067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ff3c8 00:23:32.844 [2024-11-08 04:07:07.877790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.877869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.885002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fac10 00:23:32.844 [2024-11-08 04:07:07.886660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.886717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.893945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0ff8 00:23:32.844 [2024-11-08 04:07:07.895530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.895559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.902870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0ea0 00:23:32.844 [2024-11-08 04:07:07.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.904489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.912554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eee38 00:23:32.844 [2024-11-08 04:07:07.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.913629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.921304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e1b48 00:23:32.844 [2024-11-08 04:07:07.922758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.922788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.930227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb048 00:23:32.844 [2024-11-08 04:07:07.931069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.931097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.938906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ebb98 00:23:32.844 [2024-11-08 04:07:07.939831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.939859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.844 [2024-11-08 04:07:07.948424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fe720 00:23:32.844 [2024-11-08 04:07:07.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.844 [2024-11-08 04:07:07.948921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:07.956785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fac10 00:23:33.104 [2024-11-08 04:07:07.957954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:07.957990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:07.966999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4b08 00:23:33.104 [2024-11-08 04:07:07.967562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:07.967608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:07.976024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fbcf0 00:23:33.104 [2024-11-08 04:07:07.976786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:07.976817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:07.984848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e99d8 00:23:33.104 [2024-11-08 04:07:07.985608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:07.985656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:07.993781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e88f8 00:23:33.104 [2024-11-08 04:07:07.994490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:07.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.002663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc560 00:23:33.104 [2024-11-08 04:07:08.003336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.003381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.011502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:33.104 [2024-11-08 04:07:08.012138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.012201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.020332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f20d8 00:23:33.104 [2024-11-08 04:07:08.020996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.021058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.029166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eee38 00:23:33.104 [2024-11-08 
04:07:08.029908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.029955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.037066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e27f0 00:23:33.104 [2024-11-08 04:07:08.037353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.037398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.047007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f1ca0 00:23:33.104 [2024-11-08 04:07:08.047610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.047686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.056054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:33.104 [2024-11-08 04:07:08.057278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.057307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.064864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eaab8 00:23:33.104 [2024-11-08 04:07:08.066200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.066229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.074078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f31b8 00:23:33.104 [2024-11-08 04:07:08.075317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.075346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.082200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ecc78 00:23:33.104 [2024-11-08 04:07:08.083207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.083236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.090879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ecc78 
00:23:33.104 [2024-11-08 04:07:08.092122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.092153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.101689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e5658 00:23:33.104 [2024-11-08 04:07:08.102334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.102397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.109290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ea248 00:23:33.104 [2024-11-08 04:07:08.110557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.110596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.118173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e4140 00:23:33.104 [2024-11-08 04:07:08.118423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.118457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.127303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:33.104 [2024-11-08 04:07:08.128001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.128046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.136216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:33.104 [2024-11-08 04:07:08.137349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.104 [2024-11-08 04:07:08.137379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.104 [2024-11-08 04:07:08.145916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:33.105 [2024-11-08 04:07:08.147320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.147349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.154818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with 
pdu=0x2000190fe720 00:23:33.105 [2024-11-08 04:07:08.155701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.155729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.163904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e73e0 00:23:33.105 [2024-11-08 04:07:08.165291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.165321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.174020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ddc00 00:23:33.105 [2024-11-08 04:07:08.174988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.175016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.181692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e5ec8 00:23:33.105 [2024-11-08 04:07:08.182848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.182876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.191257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ef6a8 00:23:33.105 [2024-11-08 04:07:08.191880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.191926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.198548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e8088 00:23:33.105 [2024-11-08 04:07:08.199623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.199651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.105 [2024-11-08 04:07:08.207371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6b70 00:23:33.105 [2024-11-08 04:07:08.208419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.105 [2024-11-08 04:07:08.208472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.218975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14c08f0) with pdu=0x2000190e8088 00:23:33.364 [2024-11-08 04:07:08.219857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.219886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.225637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f8618 00:23:33.364 [2024-11-08 04:07:08.225775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.225794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.236706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ee190 00:23:33.364 [2024-11-08 04:07:08.237372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.237440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.244584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6458 00:23:33.364 [2024-11-08 04:07:08.245456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.245517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.254182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eff18 00:23:33.364 [2024-11-08 04:07:08.254597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.254630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.263113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f96f8 00:23:33.364 [2024-11-08 04:07:08.263730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.263762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.272250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e8088 00:23:33.364 [2024-11-08 04:07:08.272810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.364 [2024-11-08 04:07:08.272843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.364 [2024-11-08 04:07:08.281105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14c08f0) with pdu=0x2000190e12d8 00:23:33.365 [2024-11-08 04:07:08.281705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.281734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.289955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f46d0 00:23:33.365 [2024-11-08 04:07:08.290665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.290709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.298830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f1868 00:23:33.365 [2024-11-08 04:07:08.299589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.299635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.307744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f57b0 00:23:33.365 [2024-11-08 04:07:08.308480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.308535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.316673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3a28 00:23:33.365 [2024-11-08 04:07:08.317400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.317455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.325615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0a68 00:23:33.365 [2024-11-08 04:07:08.326421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.326506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.334551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190dfdc0 00:23:33.365 [2024-11-08 04:07:08.335383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.335411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.344011] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ebfd0 00:23:33.365 [2024-11-08 04:07:08.344615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.344674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.351854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f1ca0 00:23:33.365 [2024-11-08 04:07:08.352794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.352822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.360922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb8b8 00:23:33.365 [2024-11-08 04:07:08.361273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.361310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.372019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb048 00:23:33.365 [2024-11-08 04:07:08.372899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.372927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.379747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f96f8 00:23:33.365 [2024-11-08 04:07:08.380856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.380885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.388220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6890 00:23:33.365 [2024-11-08 04:07:08.388604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.388637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.397236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f2948 00:23:33.365 [2024-11-08 04:07:08.398227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.398257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.365 
[2024-11-08 04:07:08.406045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6458 00:23:33.365 [2024-11-08 04:07:08.407112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.407141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.415282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ef270 00:23:33.365 [2024-11-08 04:07:08.416546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.416575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.425211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fcdd0 00:23:33.365 [2024-11-08 04:07:08.426475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.426513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.434198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ec840 00:23:33.365 [2024-11-08 04:07:08.435221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.435250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.442983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc128 00:23:33.365 [2024-11-08 04:07:08.444403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.444443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.452503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eaef0 00:23:33.365 [2024-11-08 04:07:08.453269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.365 [2024-11-08 04:07:08.459990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ea680 00:23:33.365 [2024-11-08 04:07:08.461121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.461147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:23:33.365 [2024-11-08 04:07:08.469526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ed0b0 00:23:33.365 [2024-11-08 04:07:08.470069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.365 [2024-11-08 04:07:08.470150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.478546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb480 00:23:33.625 [2024-11-08 04:07:08.479620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.479645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.487489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f7970 00:23:33.625 [2024-11-08 04:07:08.488446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.488628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.497047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ed0b0 00:23:33.625 [2024-11-08 04:07:08.497556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.497604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.505343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df988 00:23:33.625 [2024-11-08 04:07:08.506444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.506497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.515423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f81e0 00:23:33.625 [2024-11-08 04:07:08.516020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.516200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.523138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fa3a0 00:23:33.625 [2024-11-08 04:07:08.524207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.524387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.532073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e73e0 00:23:33.625 [2024-11-08 04:07:08.532427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.532507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.540746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190efae0 00:23:33.625 [2024-11-08 04:07:08.540942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.540962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.551408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e4140 00:23:33.625 [2024-11-08 04:07:08.552895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.552926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.561272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f6890 00:23:33.625 [2024-11-08 04:07:08.562229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.562375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.567700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:33.625 [2024-11-08 04:07:08.567782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.567801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.576667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f5378 00:23:33.625 [2024-11-08 04:07:08.576879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.576897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.586899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:33.625 [2024-11-08 04:07:08.588457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.588496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.596229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f5378 00:23:33.625 [2024-11-08 04:07:08.597574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.597744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.604392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3d08 00:23:33.625 [2024-11-08 04:07:08.605432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.605657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.614073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eb760 00:23:33.625 [2024-11-08 04:07:08.615801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.615860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.622572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f20d8 00:23:33.625 [2024-11-08 04:07:08.624632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.624831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.632990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fd208 00:23:33.625 [2024-11-08 04:07:08.633557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.633599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.642898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f46d0 00:23:33.625 [2024-11-08 04:07:08.643841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.643983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.651503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f9b30 00:23:33.625 [2024-11-08 04:07:08.652405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.652474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.660327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fd640 00:23:33.625 [2024-11-08 04:07:08.661591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.661624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.669217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0a68 00:23:33.625 [2024-11-08 04:07:08.670654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.670685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.680196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fa7d8 00:23:33.625 [2024-11-08 04:07:08.681125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.625 [2024-11-08 04:07:08.681156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.625 [2024-11-08 04:07:08.686704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e95a0 00:23:33.625 [2024-11-08 04:07:08.686899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.626 [2024-11-08 04:07:08.686918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.626 [2024-11-08 04:07:08.697073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fb048 00:23:33.626 [2024-11-08 04:07:08.698350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.626 [2024-11-08 04:07:08.698516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.626 [2024-11-08 04:07:08.706055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190eb328 00:23:33.626 [2024-11-08 04:07:08.706633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.626 [2024-11-08 04:07:08.706662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.626 [2024-11-08 04:07:08.715213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f2d80 00:23:33.626 [2024-11-08 04:07:08.716846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.626 [2024-11-08 04:07:08.716879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.626 [2024-11-08 04:07:08.723973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e8d30 00:23:33.626 [2024-11-08 04:07:08.725384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.626 [2024-11-08 04:07:08.725439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.733452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fda78 00:23:33.885 [2024-11-08 04:07:08.734262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.734294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.741617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4b08 00:23:33.885 [2024-11-08 04:07:08.741929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.741963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.751443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e8088 00:23:33.885 [2024-11-08 04:07:08.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.752393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.760970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e1710 00:23:33.885 [2024-11-08 04:07:08.762219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.762252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.770927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e88f8 00:23:33.885 [2024-11-08 04:07:08.772305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.772509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.779492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ef6a8 00:23:33.885 [2024-11-08 04:07:08.781233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.781263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.789226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4b08 00:23:33.885 [2024-11-08 04:07:08.789681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.789712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.798060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fd640 00:23:33.885 [2024-11-08 04:07:08.798612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.885 [2024-11-08 04:07:08.798636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.885 [2024-11-08 04:07:08.806941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f3e60 00:23:33.885 [2024-11-08 04:07:08.807534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.807562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.815897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fda78 00:23:33.886 [2024-11-08 04:07:08.816521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.816553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.824826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f7da8 00:23:33.886 [2024-11-08 04:07:08.826268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.826300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.833588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0350 00:23:33.886 [2024-11-08 04:07:08.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.834745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.842974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6b70 00:23:33.886 [2024-11-08 04:07:08.843333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 
04:07:08.843357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.851942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3d08 00:23:33.886 [2024-11-08 04:07:08.852497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.852568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.861320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3d08 00:23:33.886 [2024-11-08 04:07:08.862583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.862610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.869882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc560 00:23:33.886 [2024-11-08 04:07:08.871086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.871118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.878997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e5ec8 00:23:33.886 [2024-11-08 04:07:08.879758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.879792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.888487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fa7d8 00:23:33.886 [2024-11-08 04:07:08.889418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.889505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.899340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fc998 00:23:33.886 [2024-11-08 04:07:08.900289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.900320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.906112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e95a0 00:23:33.886 [2024-11-08 04:07:08.906323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.886 [2024-11-08 04:07:08.906345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.916069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e27f0 00:23:33.886 [2024-11-08 04:07:08.916981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.917007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.925030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f4f40 00:23:33.886 [2024-11-08 04:07:08.925373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.925394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.934191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f9f68 00:23:33.886 [2024-11-08 04:07:08.935256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.942186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f7da8 00:23:33.886 [2024-11-08 04:07:08.942552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.942581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.951738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e7818 00:23:33.886 [2024-11-08 04:07:08.952741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.952771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.960987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0788 00:23:33.886 [2024-11-08 04:07:08.961472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.961550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.970473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f0788 00:23:33.886 [2024-11-08 04:07:08.971332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:806 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.971362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.979199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ddc00 00:23:33.886 [2024-11-08 04:07:08.980213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.980258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.886 [2024-11-08 04:07:08.988811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e1b48 00:23:33.886 [2024-11-08 04:07:08.989392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.886 [2024-11-08 04:07:08.989452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:08.996609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ea248 00:23:34.146 [2024-11-08 04:07:08.997724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:08.997982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.006719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fac10 00:23:34.146 [2024-11-08 04:07:09.008041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.008072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.014629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f7538 00:23:34.146 [2024-11-08 04:07:09.015671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.015701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.023040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190ee5c8 00:23:34.146 [2024-11-08 04:07:09.023563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.023587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.031777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e8d30 00:23:34.146 [2024-11-08 04:07:09.032971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:24066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.032999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.042757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0ea0 00:23:34.146 [2024-11-08 04:07:09.043384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.043411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.051770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e6b70 00:23:34.146 [2024-11-08 04:07:09.053136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.053166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.062613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e5ec8 00:23:34.146 [2024-11-08 04:07:09.063589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.063617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.069012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190df118 00:23:34.146 [2024-11-08 04:07:09.069133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.069150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.078223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190f2d80 00:23:34.146 [2024-11-08 04:07:09.078926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.078952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.087426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e3d08 00:23:34.146 [2024-11-08 04:07:09.087930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.146 [2024-11-08 04:07:09.087962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.146 [2024-11-08 04:07:09.096288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190fda78 00:23:34.146 [2024-11-08 04:07:09.096575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:34.146 [2024-11-08 04:07:09.096612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:34.146 [2024-11-08 04:07:09.105172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c08f0) with pdu=0x2000190e0630
00:23:34.146 [2024-11-08 04:07:09.105385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:34.146 [2024-11-08 04:07:09.105403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[further records of this run elided: from 04:07:09.115 through 04:07:09.268 the same three-line pattern repeats, each hit a tcp.c:2036:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x14c08f0)" with a varying pdu, the offending 4 KiB WRITE (len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
00:23:34.405
00:23:34.405 Latency(us)
00:23:34.405 [2024-11-08T04:07:09.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.405 [2024-11-08T04:07:09.516Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:34.405 nvme0n1 : 2.00 28207.21 110.18 0.00 0.00 4533.71 1854.37 11319.85
00:23:34.405 [2024-11-08T04:07:09.516Z] ===================================================================================================================
00:23:34.405 [2024-11-08T04:07:09.516Z] Total : 28207.21 110.18 0.00 0.00 4533.71 1854.37 11319.85
00:23:34.405 0
00:23:34.405 04:07:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:34.406 04:07:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:34.406 | .driver_specific
00:23:34.406 | .nvme_error
00:23:34.406 | .status_code
00:23:34.406 | .command_transient_transport_error'
00:23:34.406 04:07:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:34.406 04:07:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:34.664 04:07:09 -- host/digest.sh@71 -- # (( 221 > 0 ))
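
The get_transient_errcount trace above is how digest.sh grades the run that just finished: it dumps the bdev's NVMe error counters over the bperf RPC socket and extracts the transient-transport-error count with jq (221 in this run, hence the passing (( 221 > 0 )) check; each counted error corresponds to one digest-error record above). A standalone sketch of the same query, with the socket path, bdev name, and jq filter verbatim from the trace; the errcount variable is illustrative, not part of the helper itself:

  # Fetch nvme0n1's I/O statistics from bdevperf and pull out the NVMe
  # status-code counter for "command transient transport error" (00/22).
  # The counter exists because bdev_nvme_set_options was called with
  # --nvme-error-stat for this bperf instance.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
             jq -r '.bdevs[0]
                    | .driver_specific
                    | .nvme_error
                    | .status_code
                    | .command_transient_transport_error')
  (( errcount > 0 ))  # the sub-test passes only if at least one error was counted

As a sanity check on the summary table above, the throughput column is self-consistent: 28207.21 IOPS x 4096 B per I/O = 115,536,732 B/s, which is about 110.18 MiB/s.
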
00:23:34.664 04:07:09 -- host/digest.sh@73 -- # killprocess 87364
00:23:34.664 04:07:09 -- common/autotest_common.sh@936 -- # '[' -z 87364 ']'
00:23:34.664 04:07:09 -- common/autotest_common.sh@940 -- # kill -0 87364
00:23:34.664 04:07:09 -- common/autotest_common.sh@941 -- # uname
00:23:34.664 04:07:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:34.664 04:07:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87364
killing process with pid 87364
Received shutdown signal, test time was about 2.000000 seconds
00:23:34.665 Latency(us)
00:23:34.665 [2024-11-08T04:07:09.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.665 [2024-11-08T04:07:09.776Z] ===================================================================================================================
00:23:34.665 [2024-11-08T04:07:09.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:34.665 04:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:34.665 04:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:34.665 04:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87364'
00:23:34.665 04:07:09 -- common/autotest_common.sh@955 -- # kill 87364
00:23:34.665 04:07:09 -- common/autotest_common.sh@960 -- # wait 87364
00:23:34.923 04:07:09 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:23:34.923 04:07:09 -- host/digest.sh@54 -- # local rw bs qd
00:23:34.923 04:07:09 -- host/digest.sh@56 -- # rw=randwrite
00:23:34.923 04:07:09 -- host/digest.sh@56 -- # bs=131072
00:23:34.923 04:07:09 -- host/digest.sh@56 -- # qd=16
00:23:34.923 04:07:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:34.923 04:07:09 -- host/digest.sh@58 -- # bperfpid=87455
00:23:34.923 04:07:09 -- host/digest.sh@60 -- # waitforlisten 87455 /var/tmp/bperf.sock
00:23:34.923 04:07:09 -- common/autotest_common.sh@829 -- # '[' -z 87455 ']'
00:23:34.923 04:07:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:34.923 04:07:09 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:34.923 04:07:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:34.923 04:07:09 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:34.923 04:07:09 -- common/autotest_common.sh@10 -- # set +x
00:23:34.923 [2024-11-08 04:07:09.868918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:34.923 [2024-11-08 04:07:09.869177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87455 ]
00:23:34.923 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:34.923 Zero copy mechanism will not be used.
00:23:34.923 [2024-11-08 04:07:10.001760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.182 [2024-11-08 04:07:10.100664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:36.117 04:07:10 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:36.117 04:07:10 -- common/autotest_common.sh@862 -- # return 0
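
This is the launch pattern each digest sub-test uses: start bdevperf idle (-z) on its own core with the workload parameters baked in, then wait until it answers on the RPC socket before configuring it. A minimal sketch of the step, with paths and flags taken from the trace above; the polling loop is an illustrative stand-in for the autotest waitforlisten helper, and rpc_get_methods is only used here as a cheap liveness probe:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # 128 KiB random writes at queue depth 16 for 2 seconds; -z starts the app
  # idle so the workload only runs once perform_tests is sent later.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll until the RPC server is up (stand-in for waitforlisten).
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
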
00:23:36.117 04:07:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:36.117 04:07:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:36.117 04:07:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:36.117 04:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.117 04:07:11 -- common/autotest_common.sh@10 -- # set +x
00:23:36.117 04:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.117 04:07:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:36.117 04:07:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:36.375 nvme0n1
00:23:36.375 04:07:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:36.375 04:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.375 04:07:11 -- common/autotest_common.sh@10 -- # set +x
00:23:36.375 04:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.375 04:07:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:36.375 04:07:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:36.635 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:36.635 Zero copy mechanism will not be used.
00:23:36.635 Running I/O for 2 seconds...
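
The trace above, from bdev_nvme_set_options through perform_tests, is the whole error-injection setup for this sub-test, and the ordering is the point: error counters and unlimited retries are enabled first, the controller is attached with data digest enabled next, and crc32c corruption is switched on only once the connection is up. The same sequence as plain rpc.py calls, every flag verbatim from the trace; only the RPC shorthand variable is added for readability:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Count per-controller NVMe errors and retry failed I/O indefinitely, so
  # digest failures surface as counted transient errors, not a failed job.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c error injection disabled while connecting...
  $RPC accel_error_inject_error -o crc32c -t disable

  # ...attach the TCP controller with data digest (DDGST) enabled...
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then turn on crc32c corruption (-t corrupt -i 32, as traced), so data
  # digests are miscomputed and the WRITEs complete with TRANSIENT TRANSPORT
  # ERROR (00/22), which is exactly what the records below show.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Release the queued workload.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
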
00:23:36.635 [2024-11-08 04:07:11.493669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:36.636 [2024-11-08 04:07:11.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.636 [2024-11-08 04:07:11.494175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[further records of this run elided: from 04:07:11.498 through 04:07:11.946 the same three-line pattern repeats every few milliseconds, each hit a tcp.c:2036:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90", the offending 128 KiB WRITE (len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, cid 15 or 0), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061; the run continues past the end of this excerpt]
[2024-11-08 04:07:11.946998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.950639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.950837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.950857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.954689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.954869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.954889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.958667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.958763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.958782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.962744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.962900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.962919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.966747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.966868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.966886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.970764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.970872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.970891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.974854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.975026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.975045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.979072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.979349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.979395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.983092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.983200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.983221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.987271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.987389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.987408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.991313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.991403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.991423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.995368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.995515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.995535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:11.999451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:11.999568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:11.999587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.901 [2024-11-08 04:07:12.003623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:36.901 [2024-11-08 04:07:12.003789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.901 [2024-11-08 04:07:12.003809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.176 [2024-11-08 04:07:12.007772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.176 [2024-11-08 04:07:12.007936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.176 [2024-11-08 04:07:12.007956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.176 [2024-11-08 04:07:12.011807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.176 [2024-11-08 04:07:12.012148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.176 [2024-11-08 04:07:12.012185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.176 [2024-11-08 04:07:12.015952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.176 [2024-11-08 04:07:12.016091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.176 [2024-11-08 04:07:12.016111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.176 [2024-11-08 04:07:12.020164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.176 [2024-11-08 04:07:12.020289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.020308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.024162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.024251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.024271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.028265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.028408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.028455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.032438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.032579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.032599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.036567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.036677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.036697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.040759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.040930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.044816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.045091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.045155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.048956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.049061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.049082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.053114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.053315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.053335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.057173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.057272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.061285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.061441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.061473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.065402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.065554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.065575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.069501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.069611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.069632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.073650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.073855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.073875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.077703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.077892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.077927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.081826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.081941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.081964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.085947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.086147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.089928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 
04:07:12.090019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.090038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.094076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.094210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.094229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.098196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.098303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.098322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.102193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.102291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.102310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.106295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.106475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.106507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.110485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.110793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.110829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.114476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.177 [2024-11-08 04:07:12.114572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.177 [2024-11-08 04:07:12.114592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.177 [2024-11-08 04:07:12.118600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 
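Each of these failures is the same event: the TCP transport computes a CRC32C over a data PDU, finds that it does not match the DDGST digest carried with the PDU, logs the data_crc32_calc_done error, and the WRITE completes with the retryable status TRANSIENT TRANSPORT ERROR (00/22, dnr:0), apparently a deliberate digest-corruption pass given that every command fails identically. NVMe/TCP's data digest is plain CRC32C (the Castagnoli polynomial). Below is a minimal, self-contained sketch of that check; the bitwise crc32c() is illustrative rather than SPDK's implementation (SPDK uses its optimized spdk_crc32c_update() helper), and the 512-byte block size is an assumption.

/*
 * Illustrative CRC32C (Castagnoli) data-digest check, the same digest
 * NVMe/TCP carries in a data PDU's DDGST field. Bitwise and table-free
 * for clarity; not the code path behind tcp.c:2036 above.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t
crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			/* 0x82F63B78 is the reflected Castagnoli polynomial. */
			crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
	/* One 32-block WRITE payload, assuming 512-byte blocks. */
	static uint8_t payload[32 * 512];

	memset(payload, 0xA5, sizeof(payload));
	uint32_t ddgst = crc32c(payload, sizeof(payload));

	/* Flip a single bit: the receiver's recomputed digest no longer
	 * matches, which is exactly the "Data digest error" logged above. */
	payload[100] ^= 0x01;
	uint32_t recomputed = crc32c(payload, sizeof(payload));

	printf("sent ddgst=0x%08" PRIx32 " recomputed=0x%08" PRIx32 " match=%s\n",
	       ddgst, recomputed, ddgst == recomputed ? "yes" : "no");
	return 0;
}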
00:23:37.463 [2024-11-08 04:07:12.426478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.463 [2024-11-08 04:07:12.426646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.463 [2024-11-08 04:07:12.426666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.463 [2024-11-08 04:07:12.430505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.463 [2024-11-08 04:07:12.430804] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.463 [2024-11-08 04:07:12.430837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.463 [2024-11-08 04:07:12.434545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.463 [2024-11-08 04:07:12.434644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.463 [2024-11-08 04:07:12.434663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.463 [2024-11-08 04:07:12.438710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.463 [2024-11-08 04:07:12.438841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.463 [2024-11-08 04:07:12.438860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.442728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.442849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.442868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.446797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.446986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.447005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.450709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.450858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.450877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.454725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.454823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.454844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.458888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 
[2024-11-08 04:07:12.459063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.459083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.463008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.463323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.463363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.467255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.467478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.467499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.471344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.471508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.471528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.475361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.475485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.475504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.479508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.479677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.483511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.483623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.483643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.487565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with 
pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.487683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.487704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.491730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.491938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.491959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.495746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.495969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.496006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.499801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.499914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.499937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.504007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.504197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.504217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.508076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.508228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.508248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.512176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.512339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.512359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.516185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.516329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.516349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.520294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.520400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.520421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.524375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.524578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.524599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.528399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.528692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.464 [2024-11-08 04:07:12.528715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.464 [2024-11-08 04:07:12.532356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.464 [2024-11-08 04:07:12.532494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.532515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.536563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.536714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.536734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.540560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.540679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.540698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.544659] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.544806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.544826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.548657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.548837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.548857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.552636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.552758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.552778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.556731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.556906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.556926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.560742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.560978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.561004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.465 [2024-11-08 04:07:12.564814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.465 [2024-11-08 04:07:12.565177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.465 [2024-11-08 04:07:12.565231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.569119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.569347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.569368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.573202] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.573356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.577435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.577616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.577638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.581559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.581751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.581773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.585652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.585764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.585785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.589752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.589993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.590028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.593922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.594270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.594308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.598099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.598401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.598438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.726 
[2024-11-08 04:07:12.602191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.602332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.602352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.606270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.606398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.606417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.610370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.610557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.610577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.614532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.614667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.614686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.618598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.618725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.622717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.622885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.622906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.626764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.627010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.627069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.630789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.630891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.630911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.634975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.635111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.635130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.639032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.726 [2024-11-08 04:07:12.639144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-11-08 04:07:12.639164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.726 [2024-11-08 04:07:12.643187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.643357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.643376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.647309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.647432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.647465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.651284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.651373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.651393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.655364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.655538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.655557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.659469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.659616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.663474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.663576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.663596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.667637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.667754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.667773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.671625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.671746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.671765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.675691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.675856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.675875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.679631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.679731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.679750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.683714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.683851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.683870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.687833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.687990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.688009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.691879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.692203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.695867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.696011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.696029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.700061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.700256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.700275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.704055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.704163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.708172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.708356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.708376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.712208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.712314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.712334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.716254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.716378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.716398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.720316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.720491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.720511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.724443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.724670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.724695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.728489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.728700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.728719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.732581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.732772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.732791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.736695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.736800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.736820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.740800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.740947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 
04:07:12.740966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.744856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.744979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.744999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.748940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.749025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.749044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.727 [2024-11-08 04:07:12.753012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.727 [2024-11-08 04:07:12.753172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.727 [2024-11-08 04:07:12.753191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.757087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.757333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.757410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.761048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.761179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.761199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.765178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.765344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.765363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.769236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.769606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:37.728 [2024-11-08 04:07:12.769640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.773304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.773439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.773509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.777492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.777689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.777711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.781564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.781717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.781738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.785620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.785714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.785735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.789779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.790034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.790100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.793815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.728 [2024-11-08 04:07:12.793957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.728 [2024-11-08 04:07:12.797882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:37.728 [2024-11-08 04:07:12.798047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.798066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.801972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.802088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.802108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.806108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.806218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.806237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.810103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.810292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.810312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.814217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.814376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.814395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.818349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.818529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.822347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.822548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.826365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.826472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.826504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.728 [2024-11-08 04:07:12.830562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.728 [2024-11-08 04:07:12.830844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.728 [2024-11-08 04:07:12.830881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.988 [2024-11-08 04:07:12.834787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.988 [2024-11-08 04:07:12.834959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.988 [2024-11-08 04:07:12.834979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.988 [2024-11-08 04:07:12.838967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.839183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.839204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.843163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.843319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.843339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.847307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.847549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.847571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.851397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.851544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.851564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.855680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.855933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.855951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.859771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.859881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.859901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.864015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.864157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.864175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.868182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.868321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.868340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.872312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.872443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.872464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.876470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.876633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.876652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.880577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.880907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.880948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.884609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.884723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.884743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.888705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.888868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.888888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.892704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.892981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.893042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.896782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.896883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.896903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.900950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.901115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.901136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.904984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.905091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.905112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.909094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.909280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.909300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.913165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.913287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.913308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.917288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.917397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.917417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.921403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.921646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.921667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.925573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.925974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.926010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.929538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.929727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.989 [2024-11-08 04:07:12.929748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.989 [2024-11-08 04:07:12.933639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.989 [2024-11-08 04:07:12.933786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.933807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.937756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.937887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.937907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.941901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.942018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.942037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.945961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.946084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.946103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.949974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.950109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.954105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.954274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.958140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.958338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.958357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.962208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.962418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.962438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.966282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.966467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.966487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.970315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.970424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.970443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.974408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.974582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.974600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.978521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.978689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.978708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.982510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.982621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.982640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.986669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.986846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.986866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.990759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.990984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.991011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.994826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.995022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:12.998842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:12.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:12.998977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.002874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.002997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.003016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.007005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.007164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.007183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.011043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.011149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.015002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.015092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.015112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.019096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.019260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.019280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.023181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.023386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.027379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.027551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.027571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.031516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.031673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.031692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.035594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.035698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.035719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.039727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.039861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.039881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.990 [2024-11-08 04:07:13.043755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.990 [2024-11-08 04:07:13.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.990 [2024-11-08 04:07:13.043869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.047799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.047918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.047938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.051873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.052037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.052057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.055968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.056207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.060022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.060114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.060134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.064152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.064332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.064352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.068165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.068255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.068275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.072243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.072382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.072401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.076320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.076426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.076457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.080365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.080486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.080507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.084439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.084601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.084621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.088529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.088752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.088773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:37.991 [2024-11-08 04:07:13.092726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:37.991 [2024-11-08 04:07:13.092907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-11-08 04:07:13.092927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.251 [2024-11-08 04:07:13.096906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.251 [2024-11-08 04:07:13.097052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.251 [2024-11-08 04:07:13.097072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.251 [2024-11-08 04:07:13.101086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.101208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.101230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.105313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.105571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.105594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.109312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.109413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.109449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.113341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.113469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.113521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.117452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.117657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.117677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.121534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.121903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.121958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.125615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.125714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.125734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.129698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.129887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.129922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.133570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.133931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.133966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.137606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.137689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.137721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.141685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.141886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.141923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.145642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.145801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.145821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.149608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.149783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.149804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.153760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.153984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.157854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.158048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.158067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.161875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.162038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.162058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.165940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.166054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.166073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.170007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.170108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.170127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.174120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.174277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.174296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.178125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.178395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.178466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.182187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.182314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.182332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.186333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.186459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.186479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.190446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.190543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.194627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.194803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.194821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.198717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.198821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.198840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.202832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.202939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.202958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.206950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.207116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.252 [2024-11-08 04:07:13.207135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.252 [2024-11-08 04:07:13.210998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.252 [2024-11-08 04:07:13.211248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.211318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.215058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.215169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.215188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.219150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.219274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.219293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.223244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.223328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.223348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.227276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.227449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.231376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.231488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.231507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.235381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.235514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.235532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.239546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.239708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.243515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.243742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.243777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.247485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.247648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.247667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.251668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.251825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.255729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.255831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.255852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.259825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.259971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.259990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.263857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.263975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.263996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.267979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.268077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.268097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.272108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.272265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.272284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.276185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.276414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.276459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.280221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.280419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.280449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.284281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.284414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.284443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.288317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.288450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.288481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.292481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.292616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.292635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.296545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.296688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.296707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.300617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.300738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.300758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.304661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.304832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.308760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.309076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.309121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.312789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.312884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.253 [2024-11-08 04:07:13.312904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.253 [2024-11-08 04:07:13.316952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.253 [2024-11-08 04:07:13.317130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.317149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.321017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.321110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.321130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.325073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.325208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.325227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.329078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.329229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.329248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.333109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.333255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.333274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.337251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.337404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.337423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.341322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.341529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.341550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.345477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.345601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.345623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.349667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.349824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.349843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.353687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.353802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.254 [2024-11-08 04:07:13.353852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.254 [2024-11-08 04:07:13.357886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.254 [2024-11-08 04:07:13.358054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.358089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.362080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.362193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.362213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.366308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.366407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.366427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.370379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.370570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.370590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.374440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.374634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.374654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.378428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.378598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.378617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.382562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.382766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.382786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.386626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.386812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.386832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.390682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90
00:23:38.514 [2024-11-08 04:07:13.390854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.514 [2024-11-08 04:07:13.390873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:38.514 [2024-11-08 04:07:13.394737]
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.394851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.394872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.398784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.398903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.398924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.402886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.403052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.403072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.406951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.407156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.407175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.411130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.411298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.411317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.415254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.415391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.415411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.419385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.419497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.419518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
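
Each failure above is one triplet: the target-side TCP transport (tcp.c, data_crc32_calc_done) computes CRC32C over the incoming data PDU and finds it does not match the DDGST field carried in the PDU; the host driver (nvme_qpair.c) prints the offending len:32 WRITE; and the command completes with status (00/22), that is status code type 0h (generic) with status code 22h, COMMAND TRANSIENT TRANSPORT ERROR, retryable since dnr:0. A hypothetical post-processing check (bperf.log is an assumed capture of this console output, not a file the test writes):

    # one (00/22) completion per digest failure, so this count should match
    # the transient-error counter read back over RPC at the end of the test
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
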
00:23:38.514 [2024-11-08 04:07:13.423518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.423663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.423682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.427553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.427647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.427667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.431541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.431630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.431649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.435623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.435787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.435806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.439652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.439856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.514 [2024-11-08 04:07:13.439874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.514 [2024-11-08 04:07:13.443678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.514 [2024-11-08 04:07:13.443898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.443917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.447714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.447854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.447873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.451805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.451907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.451927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.455926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.456089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.459993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.460096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.460115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.463970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.464063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.464082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.468074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.468247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.468266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.472085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.472326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.472403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.476073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.476171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.476190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.480717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.480920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.480939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.515 [2024-11-08 04:07:13.484795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14c0a90) with pdu=0x2000190fef90 00:23:38.515 [2024-11-08 04:07:13.484939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.515 [2024-11-08 04:07:13.484958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.515 00:23:38.515 Latency(us) 00:23:38.515 [2024-11-08T04:07:13.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.515 [2024-11-08T04:07:13.626Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:38.515 nvme0n1 : 2.00 7557.58 944.70 0.00 0.00 2112.59 1593.72 4855.62 00:23:38.515 [2024-11-08T04:07:13.626Z] =================================================================================================================== 00:23:38.515 [2024-11-08T04:07:13.626Z] Total : 7557.58 944.70 0.00 0.00 2112.59 1593.72 4855.62 00:23:38.515 0 00:23:38.515 04:07:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:38.515 04:07:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:38.515 04:07:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:38.515 04:07:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:38.515 | .driver_specific 00:23:38.515 | .nvme_error 00:23:38.515 | .status_code 00:23:38.515 | .command_transient_transport_error' 00:23:38.774 04:07:13 -- host/digest.sh@71 -- # (( 487 > 0 )) 00:23:38.775 04:07:13 -- host/digest.sh@73 -- # killprocess 87455 00:23:38.775 04:07:13 -- common/autotest_common.sh@936 -- # '[' -z 87455 ']' 00:23:38.775 04:07:13 -- common/autotest_common.sh@940 -- # kill -0 87455 00:23:38.775 04:07:13 -- common/autotest_common.sh@941 -- # uname 00:23:38.775 04:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:38.775 04:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87455 00:23:38.775 04:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:38.775 04:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:38.775 killing process with pid 87455 00:23:38.775 04:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87455' 00:23:38.775 Received shutdown signal, test time was about 2.000000 seconds 00:23:38.775 00:23:38.775 Latency(us) 00:23:38.775 [2024-11-08T04:07:13.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.775 [2024-11-08T04:07:13.886Z] =================================================================================================================== 00:23:38.775 [2024-11-08T04:07:13.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.775 04:07:13 -- 
common/autotest_common.sh@955 -- # kill 87455 00:23:38.775 04:07:13 -- common/autotest_common.sh@960 -- # wait 87455 00:23:39.034 04:07:14 -- host/digest.sh@115 -- # killprocess 87146 00:23:39.034 04:07:14 -- common/autotest_common.sh@936 -- # '[' -z 87146 ']' 00:23:39.034 04:07:14 -- common/autotest_common.sh@940 -- # kill -0 87146 00:23:39.034 04:07:14 -- common/autotest_common.sh@941 -- # uname 00:23:39.034 04:07:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:39.034 04:07:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87146 00:23:39.034 04:07:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:39.034 04:07:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:39.034 killing process with pid 87146 00:23:39.034 04:07:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87146' 00:23:39.034 04:07:14 -- common/autotest_common.sh@955 -- # kill 87146 00:23:39.034 04:07:14 -- common/autotest_common.sh@960 -- # wait 87146 00:23:39.292 00:23:39.292 real 0m18.271s 00:23:39.292 user 0m33.332s 00:23:39.292 sys 0m5.414s 00:23:39.292 04:07:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:39.292 04:07:14 -- common/autotest_common.sh@10 -- # set +x 00:23:39.292 ************************************ 00:23:39.292 END TEST nvmf_digest_error 00:23:39.292 ************************************ 00:23:39.551 04:07:14 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:39.551 04:07:14 -- host/digest.sh@139 -- # nvmftestfini 00:23:39.551 04:07:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:39.551 04:07:14 -- nvmf/common.sh@116 -- # sync 00:23:39.551 04:07:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:39.551 04:07:14 -- nvmf/common.sh@119 -- # set +e 00:23:39.551 04:07:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:39.551 04:07:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:39.551 rmmod nvme_tcp 00:23:39.551 rmmod nvme_fabrics 00:23:39.551 rmmod nvme_keyring 00:23:39.551 04:07:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:39.551 04:07:14 -- nvmf/common.sh@123 -- # set -e 00:23:39.551 04:07:14 -- nvmf/common.sh@124 -- # return 0 00:23:39.551 04:07:14 -- nvmf/common.sh@477 -- # '[' -n 87146 ']' 00:23:39.551 04:07:14 -- nvmf/common.sh@478 -- # killprocess 87146 00:23:39.551 04:07:14 -- common/autotest_common.sh@936 -- # '[' -z 87146 ']' 00:23:39.551 04:07:14 -- common/autotest_common.sh@940 -- # kill -0 87146 00:23:39.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87146) - No such process 00:23:39.551 Process with pid 87146 is not found 00:23:39.551 04:07:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87146 is not found' 00:23:39.551 04:07:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:39.551 04:07:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:39.551 04:07:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:39.551 04:07:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.551 04:07:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:39.551 04:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.551 04:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.551 04:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.551 04:07:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:39.551 00:23:39.551 real 0m37.834s 00:23:39.551 user 1m8.004s 00:23:39.551 sys 0m11.217s 
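
The (( 487 > 0 )) check above is the point of the test: get_transient_errcount reads bdevperf's per-bdev NVMe error counters over its private RPC socket and expects every digest failure logged earlier to have been tallied as a transient transport error. Condensed from the trace (same socket and jq path, written dotted instead of piped):

    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))    # 487 in this run

Teardown then runs killprocess for bdevperf (87455) and the nvmf app (87146); the second, redundant killprocess 87146 issued by nvmftestfini lands in the 'No such process' branch traced above, before the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded.
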
00:23:39.551 04:07:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:39.551 04:07:14 -- common/autotest_common.sh@10 -- # set +x 00:23:39.551 ************************************ 00:23:39.551 END TEST nvmf_digest 00:23:39.551 ************************************ 00:23:39.551 04:07:14 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:39.551 04:07:14 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:39.551 04:07:14 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:39.551 04:07:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.551 04:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.551 04:07:14 -- common/autotest_common.sh@10 -- # set +x 00:23:39.551 ************************************ 00:23:39.551 START TEST nvmf_mdns_discovery 00:23:39.551 ************************************ 00:23:39.551 04:07:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:39.811 * Looking for test storage... 00:23:39.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:39.811 04:07:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:39.811 04:07:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:39.811 04:07:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:39.811 04:07:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:39.811 04:07:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:39.811 04:07:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:39.811 04:07:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:39.811 04:07:14 -- scripts/common.sh@335 -- # IFS=.-: 00:23:39.811 04:07:14 -- scripts/common.sh@335 -- # read -ra ver1 00:23:39.811 04:07:14 -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.811 04:07:14 -- scripts/common.sh@336 -- # read -ra ver2 00:23:39.811 04:07:14 -- scripts/common.sh@337 -- # local 'op=<' 00:23:39.811 04:07:14 -- scripts/common.sh@339 -- # ver1_l=2 00:23:39.811 04:07:14 -- scripts/common.sh@340 -- # ver2_l=1 00:23:39.811 04:07:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:39.811 04:07:14 -- scripts/common.sh@343 -- # case "$op" in 00:23:39.811 04:07:14 -- scripts/common.sh@344 -- # : 1 00:23:39.811 04:07:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:39.811 04:07:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.811 04:07:14 -- scripts/common.sh@364 -- # decimal 1 00:23:39.811 04:07:14 -- scripts/common.sh@352 -- # local d=1 00:23:39.811 04:07:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.811 04:07:14 -- scripts/common.sh@354 -- # echo 1 00:23:39.811 04:07:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:39.811 04:07:14 -- scripts/common.sh@365 -- # decimal 2 00:23:39.811 04:07:14 -- scripts/common.sh@352 -- # local d=2 00:23:39.811 04:07:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.811 04:07:14 -- scripts/common.sh@354 -- # echo 2 00:23:39.811 04:07:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:39.811 04:07:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:39.811 04:07:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:39.811 04:07:14 -- scripts/common.sh@367 -- # return 0 00:23:39.811 04:07:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.811 04:07:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 04:07:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 04:07:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 04:07:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.811 04:07:14 -- nvmf/common.sh@7 -- # uname -s 00:23:39.811 04:07:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.811 04:07:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.811 04:07:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.811 04:07:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.811 04:07:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.811 04:07:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.811 04:07:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.811 04:07:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.811 04:07:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.811 04:07:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.811 04:07:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
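
Sourcing nvmf/common.sh also fixes the initiator identity: nvme gen-hostnqn produces the UUID-based host NQN above, the trace just below extracts the same UUID as NVME_HOSTID, and both are packed into the NVME_HOST argument array. A sketch of the equivalent assignments (the suffix-strip derivation of the host ID and the final connect line are illustrative assumptions, not lifted from this trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: host ID is the trailing UUID of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later, e.g.:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"
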
00:23:39.811 04:07:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:23:39.811 04:07:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.811 04:07:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.811 04:07:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.811 04:07:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.811 04:07:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.811 04:07:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.811 04:07:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.811 04:07:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.811 04:07:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.811 04:07:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.811 04:07:14 -- paths/export.sh@5 -- # export PATH 00:23:39.811 04:07:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.811 04:07:14 -- nvmf/common.sh@46 -- # : 0 00:23:39.811 04:07:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:39.811 04:07:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:39.811 04:07:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:39.811 04:07:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.811 04:07:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.811 04:07:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:39.811 04:07:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:39.811 04:07:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:39.811 04:07:14 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:39.811 04:07:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:39.811 04:07:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.811 04:07:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:39.811 04:07:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:39.811 04:07:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:39.811 04:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.811 04:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.812 04:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.812 04:07:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:39.812 04:07:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:39.812 04:07:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:39.812 04:07:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:39.812 04:07:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:39.812 04:07:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:39.812 04:07:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.812 04:07:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.812 04:07:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:39.812 04:07:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:39.812 04:07:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.812 04:07:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.812 04:07:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.812 04:07:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.812 04:07:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.812 04:07:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.812 04:07:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.812 04:07:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.812 04:07:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:39.812 04:07:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:39.812 Cannot find device "nvmf_tgt_br" 00:23:39.812 04:07:14 -- nvmf/common.sh@154 -- # true 00:23:39.812 04:07:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.812 Cannot find device "nvmf_tgt_br2" 00:23:39.812 04:07:14 -- nvmf/common.sh@155 -- # true 00:23:39.812 04:07:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:39.812 04:07:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:39.812 Cannot find device "nvmf_tgt_br" 00:23:39.812 04:07:14 -- nvmf/common.sh@157 -- # true 00:23:39.812 
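
The variables above pin down the test topology that nvmf_veth_init builds next: the initiator end (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target ends (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, and the three peer interfaces hang off the nvmf_br bridge; the 'Cannot find device' lines are just best-effort teardown of a previous run. A condensed sketch of the build half of the trace below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" up && ip link set "$peer" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> first target IP, as checked below
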
04:07:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:39.812 Cannot find device "nvmf_tgt_br2" 00:23:39.812 04:07:14 -- nvmf/common.sh@158 -- # true 00:23:39.812 04:07:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:40.071 04:07:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:40.071 04:07:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:40.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.071 04:07:14 -- nvmf/common.sh@161 -- # true 00:23:40.071 04:07:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:40.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.071 04:07:14 -- nvmf/common.sh@162 -- # true 00:23:40.071 04:07:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:40.071 04:07:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:40.071 04:07:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:40.071 04:07:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:40.071 04:07:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:40.071 04:07:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.071 04:07:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.071 04:07:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:40.071 04:07:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:40.071 04:07:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:40.071 04:07:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:40.071 04:07:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:40.071 04:07:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:40.071 04:07:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.071 04:07:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.071 04:07:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.071 04:07:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:40.071 04:07:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:40.071 04:07:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.071 04:07:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.071 04:07:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.071 04:07:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.071 04:07:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.071 04:07:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:40.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:23:40.071 00:23:40.071 --- 10.0.0.2 ping statistics --- 00:23:40.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.071 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:40.071 04:07:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:40.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:40.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:40.071 00:23:40.071 --- 10.0.0.3 ping statistics --- 00:23:40.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.071 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:40.071 04:07:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:40.071 00:23:40.071 --- 10.0.0.1 ping statistics --- 00:23:40.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.071 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:40.071 04:07:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.071 04:07:15 -- nvmf/common.sh@421 -- # return 0 00:23:40.071 04:07:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:40.071 04:07:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.071 04:07:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:40.071 04:07:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:40.071 04:07:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.071 04:07:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:40.071 04:07:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:40.330 04:07:15 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:40.330 04:07:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:40.330 04:07:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.330 04:07:15 -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 04:07:15 -- nvmf/common.sh@469 -- # nvmfpid=87770 00:23:40.330 04:07:15 -- nvmf/common.sh@470 -- # waitforlisten 87770 00:23:40.330 04:07:15 -- common/autotest_common.sh@829 -- # '[' -z 87770 ']' 00:23:40.330 04:07:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.330 04:07:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:40.330 04:07:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.330 04:07:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.330 04:07:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.330 04:07:15 -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 [2024-11-08 04:07:15.252922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:40.330 [2024-11-08 04:07:15.253014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.330 [2024-11-08 04:07:15.396357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.589 [2024-11-08 04:07:15.499072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:40.589 [2024-11-08 04:07:15.499254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.589 [2024-11-08 04:07:15.499271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:40.589 [2024-11-08 04:07:15.499282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.589 [2024-11-08 04:07:15.499320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.157 04:07:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.157 04:07:16 -- common/autotest_common.sh@862 -- # return 0 00:23:41.157 04:07:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:41.157 04:07:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.157 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 04:07:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 [2024-11-08 04:07:16.405754] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.416 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 [2024-11-08 04:07:16.417965] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:41.416 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 null0 00:23:41.416 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 04:07:16 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:41.416 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 null1 00:23:41.417 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:41.417 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.417 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.417 null2 00:23:41.417 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:41.417 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.417 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.417 null3 00:23:41.417 04:07:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:41.417 04:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.417 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.417 04:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@47 -- # hostpid=87820 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:41.417 04:07:16 -- host/mdns_discovery.sh@48 -- # waitforlisten 87820 /tmp/host.sock 00:23:41.417 04:07:16 -- common/autotest_common.sh@829 -- # '[' -z 87820 ']' 00:23:41.417 04:07:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:41.417 04:07:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.417 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:41.417 04:07:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:41.417 04:07:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.417 04:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:41.675 [2024-11-08 04:07:16.527722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:41.675 [2024-11-08 04:07:16.527829] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87820 ] 00:23:41.675 [2024-11-08 04:07:16.669354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.675 [2024-11-08 04:07:16.775457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:41.675 [2024-11-08 04:07:16.775651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.611 04:07:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.611 04:07:17 -- common/autotest_common.sh@862 -- # return 0 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@57 -- # avahipid=87850 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:42.611 04:07:17 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:42.611 Process 1064 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:42.611 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:42.611 Successfully dropped root privileges. 00:23:42.611 avahi-daemon 0.8 starting up. 00:23:42.611 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:42.611 Successfully called chroot(). 00:23:42.611 Successfully dropped remaining capabilities. 00:23:42.611 No service file found in /etc/avahi/services. 00:23:42.611 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
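
The echo -e piped into avahi-daemon -f /dev/fd/63 above is a config file fed over a pipe: the daemon runs inside the target namespace, restricted to the two target-side interfaces and IPv4 only, which is why its startup banner joins exactly the 10.0.0.3 and 10.0.0.2 multicast groups. Spelled out with process substitution:

    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
        '[server]' \
        'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
        'use-ipv4=yes' \
        'use-ipv6=no')
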
00:23:42.611 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:42.611 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:42.611 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:42.611 Network interface enumeration completed. 00:23:42.611 Registering new address record for fe80::b449:c6ff:fe34:6d6e on nvmf_tgt_if2.*. 00:23:42.611 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:42.611 Registering new address record for fe80::bc00:12ff:fe79:b496 on nvmf_tgt_if.*. 00:23:42.611 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:43.546 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1040012040. 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 
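
At this point the host daemon is browsing: bdev_nvme_start_mdns_discovery registers an mDNS browser for the _nvme-disc._tcp service type and, for each service it later resolves, connects to the advertised discovery controller as nqn.2021-12.io.spdk:test and attaches the reported subsystems under mdns<N>_nvme<M> names. The empty-string comparisons around here simply confirm nothing is attached before a CDC is published. The host-side RPCs, as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
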
00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:43.804 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.804 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.804 04:07:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:43.804 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:43.805 04:07:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:43.805 04:07:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:43.805 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.805 04:07:18 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:43.805 04:07:18 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:43.805 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.805 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:44.063 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:44.063 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 [2024-11-08 04:07:18.974406] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.063 04:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:44.063 04:07:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:44.063 04:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 04:07:19 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:44.063 04:07:19 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.063 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 [2024-11-08 04:07:19.034543] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.063 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 04:07:19 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:44.063 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 
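
Target-side provisioning, condensed from the rpc_cmd calls above (rpc_cmd talks to the target's default /var/tmp/spdk.sock inside the namespace): create the subsystem, back it with one of the null bdevs, expose it on the first data address, and allow the test host NQN. The trace below repeats the same sequence for nqn.2016-06.io.spdk:cnode20 with null2 on 10.0.0.3, plus a second discovery listener on 10.0.0.3:8009.

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
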
04:07:19 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:44.063 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 04:07:19 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:44.063 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.063 04:07:19 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:44.063 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.063 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.063 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.064 04:07:19 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:44.064 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.064 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.064 [2024-11-08 04:07:19.074516] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:44.064 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.064 04:07:19 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:44.064 04:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.064 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:44.064 [2024-11-08 04:07:19.082515] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:44.064 04:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.064 04:07:19 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=87907 00:23:44.064 04:07:19 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:44.064 04:07:19 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:44.998 [2024-11-08 04:07:19.874406] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:44.998 Established under name 'CDC' 00:23:45.257 [2024-11-08 04:07:20.274426] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:45.257 [2024-11-08 04:07:20.274453] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:45.257 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:45.257 cookie is 0 00:23:45.257 is_local: 1 00:23:45.257 our_own: 0 00:23:45.257 wide_area: 0 00:23:45.257 multicast: 1 00:23:45.257 cached: 1 00:23:45.515 [2024-11-08 04:07:20.374409] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:45.515 [2024-11-08 04:07:20.374435] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:45.515 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:45.515 cookie is 0 00:23:45.515 is_local: 1 00:23:45.515 our_own: 0 00:23:45.515 wide_area: 0 00:23:45.515 multicast: 1 00:23:45.515 
cached: 1 00:23:46.450 [2024-11-08 04:07:21.279937] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:46.450 [2024-11-08 04:07:21.279964] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:46.450 [2024-11-08 04:07:21.279981] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:46.450 [2024-11-08 04:07:21.366022] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:46.450 [2024-11-08 04:07:21.379720] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.450 [2024-11-08 04:07:21.379739] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.450 [2024-11-08 04:07:21.379757] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.450 [2024-11-08 04:07:21.425392] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:46.450 [2024-11-08 04:07:21.425427] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:46.450 [2024-11-08 04:07:21.466525] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:46.450 [2024-11-08 04:07:21.521083] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:46.450 [2024-11-08 04:07:21.521107] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@80 -- # sort 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@80 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@76 -- # sort 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@76 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@68 -- # sort 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@68 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@64 -- # sort 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@64 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@72 -- # xargs 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:49.737 04:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.737 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:23:49.737 04:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.737 04:07:24 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.673 04:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:50.673 04:07:25 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@64 -- # sort 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@64 -- # xargs 00:23:50.673 04:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:50.673 04:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:50.673 04:07:25 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 04:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:50.673 04:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.673 04:07:25 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 [2024-11-08 04:07:25.625139] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.673 [2024-11-08 04:07:25.625835] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:50.673 [2024-11-08 04:07:25.625863] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:50.673 [2024-11-08 04:07:25.625893] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:50.673 [2024-11-08 04:07:25.625904] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:50.673 04:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:50.673 04:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.673 04:07:25 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 [2024-11-08 04:07:25.633052] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:50.673 [2024-11-08 04:07:25.633846] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:50.673 [2024-11-08 04:07:25.633908] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:50.673 04:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.673 04:07:25 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:50.673 [2024-11-08 04:07:25.764927] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:50.673 [2024-11-08 04:07:25.765054] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:50.932 [2024-11-08 04:07:25.823103] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:50.932 [2024-11-08 04:07:25.823125] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:50.932 [2024-11-08 04:07:25.823131] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:50.932 [2024-11-08 04:07:25.823145] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:50.932 [2024-11-08 04:07:25.823196] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:50.932 [2024-11-08 04:07:25.823204] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
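A few entries back, the script consumes the namespace-attach events through the notification API rather than by diffing bdev lists. The cursor arithmetic is simple; a minimal sketch of the helper, assuming the same /tmp/host.sock host app and a shell-global notify_id as the log shows:

    # Count events newer than the cursor, then advance the cursor past them.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Two namespaces are added per round (null0/null2, then null1/null3), so each call reports notification_count=2 and notify_id steps 0 -> 2 -> 4, matching the [[ 2 == 2 ]] checks above.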
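The add-listener step just above is the failover setup: each subsystem gains a second TCP listener on port 4421 while the 4420 connections stay attached. Nothing is re-scanned manually; the target raises an AER on each discovery controller, the host re-reads the discovery log page, and the 4421 entry shows up as a new path for the existing controller. The target-side commands, exactly as issued here:

    # Second listener per subsystem; existing 4420 paths remain connected.
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421

The got aer / sent discovery log page command / new path / attach ... done entries surrounding this note are that sequence playing out once per discovery controller.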
00:23:50.932 [2024-11-08 04:07:25.823209] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:50.932 [2024-11-08 04:07:25.823221] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:50.932 [2024-11-08 04:07:25.869015] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:50.932 [2024-11-08 04:07:25.869033] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:50.932 [2024-11-08 04:07:25.869078] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:50.932 [2024-11-08 04:07:25.869086] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@68 -- # sort 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@68 -- # xargs 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # xargs 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:51.869 04:07:26 -- 
host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@72 -- # xargs 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 [2024-11-08 04:07:26.941897] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:51.869 [2024-11-08 04:07:26.941922] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:51.869 [2024-11-08 04:07:26.941950] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:51.869 [2024-11-08 04:07:26.941961] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:51.869 04:07:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.869 04:07:26 -- common/autotest_common.sh@10 -- # set +x 00:23:51.869 [2024-11-08 04:07:26.949913] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:51.869 [2024-11-08 04:07:26.949961] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:51.869 [2024-11-08 04:07:26.951682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.951833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.951849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.951858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.951866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 
04:07:26.951882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.951890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.951898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.951905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:51.869 04:07:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.869 [2024-11-08 04:07:26.954645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.954668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.954679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.954687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.954695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.954703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.954712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.869 [2024-11-08 04:07:26.954719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.869 [2024-11-08 04:07:26.954727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:51.869 04:07:26 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:51.869 [2024-11-08 04:07:26.961647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:51.869 [2024-11-08 04:07:26.964613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:51.869 [2024-11-08 04:07:26.971664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:51.869 [2024-11-08 04:07:26.971746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.869 [2024-11-08 04:07:26.971788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.869 [2024-11-08 04:07:26.971803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:51.869 [2024-11-08 04:07:26.971812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:51.869 [2024-11-08 04:07:26.971826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 
00:23:51.869 [2024-11-08 04:07:26.971838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:51.869 [2024-11-08 04:07:26.971845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:51.869 [2024-11-08 04:07:26.971854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:51.869 [2024-11-08 04:07:26.971868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:51.869 [2024-11-08 04:07:26.974622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:51.869 [2024-11-08 04:07:26.974835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.869 [2024-11-08 04:07:26.974877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.869 [2024-11-08 04:07:26.974893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:51.869 [2024-11-08 04:07:26.974902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:51.869 [2024-11-08 04:07:26.974925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:51.870 [2024-11-08 04:07:26.974940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:51.870 [2024-11-08 04:07:26.974947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:51.870 [2024-11-08 04:07:26.974955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:51.870 [2024-11-08 04:07:26.974968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.130 [2024-11-08 04:07:26.981713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.130 [2024-11-08 04:07:26.981929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.981972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.981987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.130 [2024-11-08 04:07:26.981996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:26.982010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:26.982037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:26.982047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.130 [2024-11-08 04:07:26.982055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.130 [2024-11-08 04:07:26.982067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
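The connect() failed, errno = 111 storm that starts here is the expected aftermath of the nvmf_subsystem_remove_listener calls a few entries back: the 4420 listeners are gone, but the initiator still holds controllers pointing at them, so bdev_nvme keeps attempting resets against refused connections until the next discovery log page prunes the stale paths. The trigger, as issued above, plus an illustrative wait (the script itself just does sleep 1; the polling loop below is a hedged alternative, not from the source):

    # Remove the original 4420 listeners; only 4421 remains on each target.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

    # Illustrative: poll until the host reports only the 4421 path left.
    while [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
               | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" != "4421" ]]; do
        sleep 0.1
    done

errno 111 is ECONNREFUSED; the Resetting controller failed entries are benign churn and stop as soon as discovery_remove_controllers logs the 4420 paths as not found.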
00:23:52.130 [2024-11-08 04:07:26.984801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.130 [2024-11-08 04:07:26.984871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.984910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.984924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.130 [2024-11-08 04:07:26.984933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:26.984947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:26.984959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:26.984966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.130 [2024-11-08 04:07:26.984973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.130 [2024-11-08 04:07:26.984985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.130 [2024-11-08 04:07:26.991897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.130 [2024-11-08 04:07:26.992091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.992241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.992343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.130 [2024-11-08 04:07:26.992504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:26.992671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:26.992690] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:26.992699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.130 [2024-11-08 04:07:26.992707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.130 [2024-11-08 04:07:26.992722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.130 [2024-11-08 04:07:26.994847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.130 [2024-11-08 04:07:26.995058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.995194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:26.995322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.130 [2024-11-08 04:07:26.995491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:26.995695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:26.995933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:26.996046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.130 [2024-11-08 04:07:26.996061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.130 [2024-11-08 04:07:26.996079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.130 [2024-11-08 04:07:27.002059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.130 [2024-11-08 04:07:27.002282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:27.002440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:27.002492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.130 [2024-11-08 04:07:27.002607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:27.002666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:27.002801] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:27.002922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.130 [2024-11-08 04:07:27.002973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.130 [2024-11-08 04:07:27.003116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.130 [2024-11-08 04:07:27.005020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.130 [2024-11-08 04:07:27.005248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:27.005432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.130 [2024-11-08 04:07:27.005582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.130 [2024-11-08 04:07:27.005704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.130 [2024-11-08 04:07:27.005872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.130 [2024-11-08 04:07:27.005907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.130 [2024-11-08 04:07:27.005918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.005926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.131 [2024-11-08 04:07:27.005940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.131 [2024-11-08 04:07:27.012244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.131 [2024-11-08 04:07:27.012322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.012363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.012378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.131 [2024-11-08 04:07:27.012387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.012401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.012413] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.012438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.012446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.131 [2024-11-08 04:07:27.012458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.131 [2024-11-08 04:07:27.015211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.131 [2024-11-08 04:07:27.015282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.015320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.015335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.131 [2024-11-08 04:07:27.015344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.015357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.015369] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.015376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.015384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.131 [2024-11-08 04:07:27.015396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.131 [2024-11-08 04:07:27.022292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.131 [2024-11-08 04:07:27.022362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.022400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.022427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.131 [2024-11-08 04:07:27.022439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.022452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.022464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.022472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.022480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.131 [2024-11-08 04:07:27.022492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.131 [2024-11-08 04:07:27.025256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.131 [2024-11-08 04:07:27.025340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.025380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.025395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.131 [2024-11-08 04:07:27.025404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.025444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.025459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.025466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.025474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.131 [2024-11-08 04:07:27.025495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.131 [2024-11-08 04:07:27.032337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.131 [2024-11-08 04:07:27.032543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.032587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.032602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.131 [2024-11-08 04:07:27.032612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.032626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.032638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.032646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.032654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.131 [2024-11-08 04:07:27.032667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.131 [2024-11-08 04:07:27.035313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.131 [2024-11-08 04:07:27.035383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.035445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.035470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.131 [2024-11-08 04:07:27.035480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.035494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.035506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.035513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.035520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.131 [2024-11-08 04:07:27.035532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.131 [2024-11-08 04:07:27.042511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.131 [2024-11-08 04:07:27.042589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.042629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.042644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.131 [2024-11-08 04:07:27.042654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.042668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.042679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.042686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.042694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.131 [2024-11-08 04:07:27.042706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.131 [2024-11-08 04:07:27.045358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.131 [2024-11-08 04:07:27.045442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.045491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.045508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.131 [2024-11-08 04:07:27.045517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.045532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.045544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.045551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.045564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.131 [2024-11-08 04:07:27.045577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.131 [2024-11-08 04:07:27.052559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.131 [2024-11-08 04:07:27.052628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.052666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.131 [2024-11-08 04:07:27.052680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.131 [2024-11-08 04:07:27.052689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.131 [2024-11-08 04:07:27.052702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.131 [2024-11-08 04:07:27.052714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.131 [2024-11-08 04:07:27.052721] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.131 [2024-11-08 04:07:27.052729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.131 [2024-11-08 04:07:27.052740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.131 [2024-11-08 04:07:27.055402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.132 [2024-11-08 04:07:27.055478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.055517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.055546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.132 [2024-11-08 04:07:27.055555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.132 [2024-11-08 04:07:27.055571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.132 [2024-11-08 04:07:27.055583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.132 [2024-11-08 04:07:27.055591] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.132 [2024-11-08 04:07:27.055598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.132 [2024-11-08 04:07:27.055610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.132 [2024-11-08 04:07:27.062603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.132 [2024-11-08 04:07:27.062792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.062834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.062849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.132 [2024-11-08 04:07:27.062859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.132 [2024-11-08 04:07:27.062873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.132 [2024-11-08 04:07:27.062895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.132 [2024-11-08 04:07:27.062905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.132 [2024-11-08 04:07:27.062913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.132 [2024-11-08 04:07:27.062926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.132 [2024-11-08 04:07:27.065453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.132 [2024-11-08 04:07:27.065530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.065569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.065583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.132 [2024-11-08 04:07:27.065592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.132 [2024-11-08 04:07:27.065605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.132 [2024-11-08 04:07:27.065617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.132 [2024-11-08 04:07:27.065624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.132 [2024-11-08 04:07:27.065632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.132 [2024-11-08 04:07:27.065644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.132 [2024-11-08 04:07:27.072758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.132 [2024-11-08 04:07:27.072942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.072985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.073000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b73b70 with addr=10.0.0.2, port=4420 00:23:52.132 [2024-11-08 04:07:27.073010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73b70 is same with the state(5) to be set 00:23:52.132 [2024-11-08 04:07:27.073024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73b70 (9): Bad file descriptor 00:23:52.132 [2024-11-08 04:07:27.073051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:52.132 [2024-11-08 04:07:27.073060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:52.132 [2024-11-08 04:07:27.073067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:52.132 [2024-11-08 04:07:27.073080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:52.132 [2024-11-08 04:07:27.075497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:52.132 [2024-11-08 04:07:27.075568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.075607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.132 [2024-11-08 04:07:27.075622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0f410 with addr=10.0.0.3, port=4420 00:23:52.132 [2024-11-08 04:07:27.075631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f410 is same with the state(5) to be set 00:23:52.132 [2024-11-08 04:07:27.075645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f410 (9): Bad file descriptor 00:23:52.132 [2024-11-08 04:07:27.075656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:52.132 [2024-11-08 04:07:27.075663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:52.132 [2024-11-08 04:07:27.075671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:52.132 [2024-11-08 04:07:27.075683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:52.132 [2024-11-08 04:07:27.082050] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:52.132 [2024-11-08 04:07:27.082075] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:52.132 [2024-11-08 04:07:27.082092] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.132 [2024-11-08 04:07:27.082122] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:52.132 [2024-11-08 04:07:27.082135] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:52.132 [2024-11-08 04:07:27.082146] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:52.132 [2024-11-08 04:07:27.168116] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:52.132 [2024-11-08 04:07:27.168164] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:53.076 04:07:27 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:53.076 04:07:27 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.076 04:07:27 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:53.076 04:07:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.076 04:07:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.076 04:07:27 -- host/mdns_discovery.sh@68 -- # xargs 00:23:53.076 04:07:27 -- host/mdns_discovery.sh@68 -- # sort 00:23:53.076 04:07:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
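Once the 4420 paths are pruned (the not found entries above), the verification pass here reduces to three assertions: both discovery-managed controllers survive, all four namespaces are still exposed, and each controller is down to the single 4421 path. Condensed with the helpers sketched earlier (same socket and naming assumptions):

    [[ "$(get_subsystem_names)" == "mdns0_nvme0 mdns1_nvme0" ]]
    [[ "$(get_bdev_list)" == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]]
    for ctrlr in mdns0_nvme0 mdns1_nvme0; do
        # Exactly one remaining path per controller, on port 4421.
        [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == "4421" ]]
    done

Removing a listener therefore behaves as path failover rather than device loss: the bdev list never changes while the transport service id flips from 4420 to 4421.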
00:23:53.076 04:07:28 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.076 04:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.076 04:07:28 -- common/autotest_common.sh@10 -- # set +x 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:53.076 04:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:53.076 04:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.076 04:07:28 -- common/autotest_common.sh@10 -- # set +x 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:53.076 04:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:53.076 04:07:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:53.076 04:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.076 04:07:28 -- common/autotest_common.sh@10 -- # set +x 00:23:53.076 04:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:53.353 04:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:53.353 04:07:28 -- common/autotest_common.sh@10 -- # set +x 00:23:53.353 04:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:53.353 04:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.353 04:07:28 -- common/autotest_common.sh@10 -- # set +x 00:23:53.353 04:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.353 04:07:28 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:53.353 [2024-11-08 04:07:28.274409] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:54.318 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@80 -- # sort 00:23:54.318 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@80 -- # xargs 00:23:54.318 04:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:54.318 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.318 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@68 -- # xargs 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@68 -- # sort 00:23:54.318 04:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:54.318 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.318 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@64 -- # sort 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@64 -- # xargs 00:23:54.318 04:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:54.318 04:07:29 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:54.319 04:07:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:54.319 04:07:29 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:54.319 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.319 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.577 04:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:54.577 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.577 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.577 04:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:54.577 04:07:29 -- common/autotest_common.sh@650 -- # local es=0 00:23:54.577 04:07:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:54.577 04:07:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:54.577 04:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.577 04:07:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:54.577 04:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.577 04:07:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:54.577 04:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.577 04:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:54.577 [2024-11-08 04:07:29.491442] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:54.577 2024/11/08 04:07:29 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:54.577 request: 00:23:54.577 { 00:23:54.577 "method": "bdev_nvme_start_mdns_discovery", 00:23:54.577 "params": { 00:23:54.577 "name": "mdns", 00:23:54.577 "svcname": "_nvme-disc._http", 00:23:54.577 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:54.577 } 00:23:54.577 } 00:23:54.577 Got JSON-RPC error response 00:23:54.577 GoRPCClient: error on JSON-RPC call 00:23:54.577 04:07:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:54.577 04:07:29 -- common/autotest_common.sh@653 -- # es=1 00:23:54.577 04:07:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:54.577 04:07:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:54.577 04:07:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:54.577 04:07:29 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:54.835 [2024-11-08 04:07:29.880092] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:55.093 [2024-11-08 04:07:29.980087] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:55.093 [2024-11-08 04:07:30.080094] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:55.093 [2024-11-08 04:07:30.080113] 
bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:55.093 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:55.093 cookie is 0 00:23:55.093 is_local: 1 00:23:55.093 our_own: 0 00:23:55.093 wide_area: 0 00:23:55.093 multicast: 1 00:23:55.093 cached: 1 00:23:55.093 [2024-11-08 04:07:30.180093] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:55.093 [2024-11-08 04:07:30.180111] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:55.093 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:55.093 cookie is 0 00:23:55.093 is_local: 1 00:23:55.093 our_own: 0 00:23:55.093 wide_area: 0 00:23:55.093 multicast: 1 00:23:55.093 cached: 1 00:23:56.029 [2024-11-08 04:07:31.091458] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:56.029 [2024-11-08 04:07:31.091478] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:56.029 [2024-11-08 04:07:31.091494] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:56.288 [2024-11-08 04:07:31.178544] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:56.288 [2024-11-08 04:07:31.191315] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:56.288 [2024-11-08 04:07:31.191333] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:56.288 [2024-11-08 04:07:31.191347] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.288 [2024-11-08 04:07:31.247083] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:56.288 [2024-11-08 04:07:31.247108] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:56.288 [2024-11-08 04:07:31.277226] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:56.288 [2024-11-08 04:07:31.335841] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:56.288 [2024-11-08 04:07:31.335866] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:59.573 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.573 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@80 -- # sort 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@80 -- # xargs 00:23:59.573 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_discovery_info 00:23:59.573 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.573 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@76 -- # sort 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@76 -- # xargs 00:23:59.573 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@64 -- # sort 00:23:59.573 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@64 -- # xargs 00:23:59.573 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.573 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:59.573 04:07:34 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:59.573 04:07:34 -- common/autotest_common.sh@650 -- # local es=0 00:23:59.573 04:07:34 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:59.573 04:07:34 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:59.573 04:07:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.573 04:07:34 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:59.573 04:07:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.573 04:07:34 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:59.573 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.573 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.573 [2024-11-08 04:07:34.673028] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:59.573 2024/11/08 04:07:34 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:59.573 request: 00:23:59.573 { 00:23:59.573 "method": "bdev_nvme_start_mdns_discovery", 00:23:59.574 "params": { 00:23:59.574 "name": "cdc", 00:23:59.574 "svcname": "_nvme-disc._tcp", 00:23:59.574 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:59.574 } 00:23:59.574 } 00:23:59.574 Got JSON-RPC error response 00:23:59.574 GoRPCClient: error on JSON-RPC call 00:23:59.574 04:07:34 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.574 04:07:34 -- common/autotest_common.sh@653 -- # es=1 00:23:59.574 04:07:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.574 04:07:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:23:59.574 04:07:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:59.832 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.832 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@76 -- # xargs 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@76 -- # sort 00:23:59.832 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:59.832 04:07:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:59.833 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@64 -- # sort 00:23:59.833 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@64 -- # xargs 00:23:59.833 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:59.833 04:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.833 04:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:59.833 04:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@197 -- # kill 87820 00:23:59.833 04:07:34 -- host/mdns_discovery.sh@200 -- # wait 87820 00:23:59.833 [2024-11-08 04:07:34.940274] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:00.092 04:07:35 -- host/mdns_discovery.sh@201 -- # kill 87907 00:24:00.092 Got SIGTERM, quitting. 00:24:00.092 04:07:35 -- host/mdns_discovery.sh@202 -- # kill 87850 00:24:00.092 04:07:35 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:24:00.092 04:07:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:00.092 04:07:35 -- nvmf/common.sh@116 -- # sync 00:24:00.092 Got SIGTERM, quitting. 00:24:00.092 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:00.092 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:00.092 avahi-daemon 0.8 exiting. 
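Both negative checks in this test hinge on the same contract: a second bdev_nvme_start_mdns_discovery under an already-registered name, or for an already-watched svcname, must come back as JSON-RPC error -17 (File exists), and the NOT wrapper turns that expected failure into a pass. Stripped of the harness, the assertion is roughly this sketch (same socket and service names as above):

    # re-registering the running discovery service must be rejected with -17
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
           -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
        echo "duplicate mdns discovery unexpectedly succeeded" >&2
        exit 1
    fi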
00:24:00.092 04:07:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:00.092 04:07:35 -- nvmf/common.sh@119 -- # set +e 00:24:00.092 04:07:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:00.092 04:07:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:00.092 rmmod nvme_tcp 00:24:00.092 rmmod nvme_fabrics 00:24:00.092 rmmod nvme_keyring 00:24:00.092 04:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:00.092 04:07:35 -- nvmf/common.sh@123 -- # set -e 00:24:00.092 04:07:35 -- nvmf/common.sh@124 -- # return 0 00:24:00.092 04:07:35 -- nvmf/common.sh@477 -- # '[' -n 87770 ']' 00:24:00.092 04:07:35 -- nvmf/common.sh@478 -- # killprocess 87770 00:24:00.092 04:07:35 -- common/autotest_common.sh@936 -- # '[' -z 87770 ']' 00:24:00.092 04:07:35 -- common/autotest_common.sh@940 -- # kill -0 87770 00:24:00.092 04:07:35 -- common/autotest_common.sh@941 -- # uname 00:24:00.092 04:07:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:00.092 04:07:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87770 00:24:00.351 04:07:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:00.351 04:07:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:00.351 killing process with pid 87770 00:24:00.351 04:07:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87770' 00:24:00.351 04:07:35 -- common/autotest_common.sh@955 -- # kill 87770 00:24:00.351 04:07:35 -- common/autotest_common.sh@960 -- # wait 87770 00:24:00.351 04:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:00.351 04:07:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:00.351 04:07:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:00.351 04:07:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.351 04:07:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:00.351 04:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.351 04:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.351 04:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.610 04:07:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:00.610 00:24:00.610 real 0m20.853s 00:24:00.610 user 0m40.774s 00:24:00.610 sys 0m2.027s 00:24:00.610 04:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:00.610 04:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:00.610 ************************************ 00:24:00.610 END TEST nvmf_mdns_discovery 00:24:00.610 ************************************ 00:24:00.610 04:07:35 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:24:00.610 04:07:35 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:00.610 04:07:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:00.610 04:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:00.610 04:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:00.610 ************************************ 00:24:00.610 START TEST nvmf_multipath 00:24:00.610 ************************************ 00:24:00.610 04:07:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:00.610 * Looking for test storage... 
00:24:00.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:00.610 04:07:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:00.610 04:07:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:00.610 04:07:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:00.610 04:07:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:00.610 04:07:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:00.610 04:07:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:00.610 04:07:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:00.610 04:07:35 -- scripts/common.sh@335 -- # IFS=.-: 00:24:00.610 04:07:35 -- scripts/common.sh@335 -- # read -ra ver1 00:24:00.610 04:07:35 -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.610 04:07:35 -- scripts/common.sh@336 -- # read -ra ver2 00:24:00.610 04:07:35 -- scripts/common.sh@337 -- # local 'op=<' 00:24:00.610 04:07:35 -- scripts/common.sh@339 -- # ver1_l=2 00:24:00.610 04:07:35 -- scripts/common.sh@340 -- # ver2_l=1 00:24:00.610 04:07:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:00.610 04:07:35 -- scripts/common.sh@343 -- # case "$op" in 00:24:00.610 04:07:35 -- scripts/common.sh@344 -- # : 1 00:24:00.610 04:07:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:00.610 04:07:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:00.610 04:07:35 -- scripts/common.sh@364 -- # decimal 1 00:24:00.610 04:07:35 -- scripts/common.sh@352 -- # local d=1 00:24:00.610 04:07:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.610 04:07:35 -- scripts/common.sh@354 -- # echo 1 00:24:00.610 04:07:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:00.610 04:07:35 -- scripts/common.sh@365 -- # decimal 2 00:24:00.610 04:07:35 -- scripts/common.sh@352 -- # local d=2 00:24:00.610 04:07:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.610 04:07:35 -- scripts/common.sh@354 -- # echo 2 00:24:00.610 04:07:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:00.610 04:07:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:00.610 04:07:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:00.610 04:07:35 -- scripts/common.sh@367 -- # return 0 00:24:00.610 04:07:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.610 04:07:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.610 --rc genhtml_branch_coverage=1 00:24:00.610 --rc genhtml_function_coverage=1 00:24:00.610 --rc genhtml_legend=1 00:24:00.610 --rc geninfo_all_blocks=1 00:24:00.610 --rc geninfo_unexecuted_blocks=1 00:24:00.610 00:24:00.610 ' 00:24:00.610 04:07:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.610 --rc genhtml_branch_coverage=1 00:24:00.610 --rc genhtml_function_coverage=1 00:24:00.610 --rc genhtml_legend=1 00:24:00.610 --rc geninfo_all_blocks=1 00:24:00.610 --rc geninfo_unexecuted_blocks=1 00:24:00.610 00:24:00.610 ' 00:24:00.610 04:07:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.610 --rc genhtml_branch_coverage=1 00:24:00.610 --rc genhtml_function_coverage=1 00:24:00.610 --rc genhtml_legend=1 00:24:00.610 --rc geninfo_all_blocks=1 00:24:00.610 --rc geninfo_unexecuted_blocks=1 00:24:00.610 00:24:00.610 ' 00:24:00.610 
04:07:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.610 --rc genhtml_branch_coverage=1 00:24:00.610 --rc genhtml_function_coverage=1 00:24:00.610 --rc genhtml_legend=1 00:24:00.610 --rc geninfo_all_blocks=1 00:24:00.610 --rc geninfo_unexecuted_blocks=1 00:24:00.610 00:24:00.610 ' 00:24:00.610 04:07:35 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.610 04:07:35 -- nvmf/common.sh@7 -- # uname -s 00:24:00.869 04:07:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.869 04:07:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.869 04:07:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.869 04:07:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.869 04:07:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.869 04:07:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.870 04:07:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.870 04:07:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.870 04:07:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.870 04:07:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.870 04:07:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:24:00.870 04:07:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:24:00.870 04:07:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.870 04:07:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.870 04:07:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.870 04:07:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.870 04:07:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.870 04:07:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.870 04:07:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.870 04:07:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.870 04:07:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.870 04:07:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.870 04:07:35 -- paths/export.sh@5 -- # export PATH 00:24:00.870 04:07:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.870 04:07:35 -- nvmf/common.sh@46 -- # : 0 00:24:00.870 04:07:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:00.870 04:07:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:00.870 04:07:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:00.870 04:07:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.870 04:07:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.870 04:07:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:00.870 04:07:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:00.870 04:07:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:00.870 04:07:35 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.870 04:07:35 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.870 04:07:35 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:00.870 04:07:35 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:00.870 04:07:35 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.870 04:07:35 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:00.870 04:07:35 -- host/multipath.sh@30 -- # nvmftestinit 00:24:00.870 04:07:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:00.870 04:07:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.870 04:07:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:00.870 04:07:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:00.870 04:07:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:00.870 04:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.870 04:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.870 04:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.870 04:07:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:00.870 04:07:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:00.870 04:07:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:00.870 04:07:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:00.870 04:07:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:00.870 04:07:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:00.870 04:07:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.870 04:07:35 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.870 04:07:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:00.870 04:07:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:00.870 04:07:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.870 04:07:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.870 04:07:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.870 04:07:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.870 04:07:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.870 04:07:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.870 04:07:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.870 04:07:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.870 04:07:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:00.870 04:07:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:00.870 Cannot find device "nvmf_tgt_br" 00:24:00.870 04:07:35 -- nvmf/common.sh@154 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.870 Cannot find device "nvmf_tgt_br2" 00:24:00.870 04:07:35 -- nvmf/common.sh@155 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:00.870 04:07:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:00.870 Cannot find device "nvmf_tgt_br" 00:24:00.870 04:07:35 -- nvmf/common.sh@157 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:00.870 Cannot find device "nvmf_tgt_br2" 00:24:00.870 04:07:35 -- nvmf/common.sh@158 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:00.870 04:07:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:00.870 04:07:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.870 04:07:35 -- nvmf/common.sh@161 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.870 04:07:35 -- nvmf/common.sh@162 -- # true 00:24:00.870 04:07:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.870 04:07:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.870 04:07:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.870 04:07:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.870 04:07:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.870 04:07:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.870 04:07:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.870 04:07:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:00.870 04:07:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:00.870 04:07:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:00.870 04:07:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:00.870 04:07:35 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:24:00.870 04:07:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:00.870 04:07:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.870 04:07:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:01.129 04:07:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:01.129 04:07:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:01.129 04:07:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:01.129 04:07:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:01.129 04:07:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:01.129 04:07:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:01.129 04:07:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:01.129 04:07:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:01.129 04:07:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:01.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:24:01.129 00:24:01.129 --- 10.0.0.2 ping statistics --- 00:24:01.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.129 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:01.129 04:07:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:01.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:01.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:24:01.129 00:24:01.129 --- 10.0.0.3 ping statistics --- 00:24:01.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.129 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:01.129 04:07:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:01.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:01.129 00:24:01.129 --- 10.0.0.1 ping statistics --- 00:24:01.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.129 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:01.129 04:07:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.129 04:07:36 -- nvmf/common.sh@421 -- # return 0 00:24:01.129 04:07:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:01.129 04:07:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.129 04:07:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:01.129 04:07:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:01.129 04:07:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.129 04:07:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:01.129 04:07:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:01.129 04:07:36 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:01.129 04:07:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:01.129 04:07:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.129 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:01.129 04:07:36 -- nvmf/common.sh@469 -- # nvmfpid=88422 00:24:01.129 04:07:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:01.129 04:07:36 -- nvmf/common.sh@470 -- # waitforlisten 88422 00:24:01.129 04:07:36 -- common/autotest_common.sh@829 -- # '[' -z 88422 ']' 00:24:01.130 04:07:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.130 04:07:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.130 04:07:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.130 04:07:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.130 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:01.130 [2024-11-08 04:07:36.155490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:01.130 [2024-11-08 04:07:36.155589] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.388 [2024-11-08 04:07:36.298559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.388 [2024-11-08 04:07:36.407014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:01.388 [2024-11-08 04:07:36.407191] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.388 [2024-11-08 04:07:36.407214] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.388 [2024-11-08 04:07:36.407225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
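nvmfappstart boots the target inside the nvmf_tgt_ns_spdk namespace built during the veth setup above, so the initiator on 10.0.0.1 reaches the listeners on 10.0.0.2/10.0.0.3 across nvmf_br. Reduced to its essentials the launch looks like the sketch below; -m 0x3 pins two reactors and -e 0xFFFF enables every tracepoint group, while the harness's waitforlisten does the equivalent of the polling shown (a sketch only; the default RPC socket path is assumed):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the RPC socket until the app answers before configuring transports
    scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null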
00:24:01.388 [2024-11-08 04:07:36.407468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.388 [2024-11-08 04:07:36.407479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.324 04:07:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.324 04:07:37 -- common/autotest_common.sh@862 -- # return 0 00:24:02.324 04:07:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:02.324 04:07:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.324 04:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:02.324 04:07:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.324 04:07:37 -- host/multipath.sh@33 -- # nvmfapp_pid=88422 00:24:02.324 04:07:37 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.583 [2024-11-08 04:07:37.445094] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.583 04:07:37 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:02.841 Malloc0 00:24:02.841 04:07:37 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:03.100 04:07:37 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.358 04:07:38 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.358 [2024-11-08 04:07:38.451750] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.618 04:07:38 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.618 [2024-11-08 04:07:38.651900] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.618 04:07:38 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:03.618 04:07:38 -- host/multipath.sh@44 -- # bdevperf_pid=88526 00:24:03.618 04:07:38 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.618 04:07:38 -- host/multipath.sh@47 -- # waitforlisten 88526 /var/tmp/bdevperf.sock 00:24:03.618 04:07:38 -- common/autotest_common.sh@829 -- # '[' -z 88526 ']' 00:24:03.618 04:07:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.618 04:07:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.618 04:07:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:03.618 04:07:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.618 04:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:04.995 04:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.995 04:07:39 -- common/autotest_common.sh@862 -- # return 0 00:24:04.995 04:07:39 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:04.995 04:07:39 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:05.253 Nvme0n1 00:24:05.253 04:07:40 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:05.819 Nvme0n1 00:24:05.819 04:07:40 -- host/multipath.sh@78 -- # sleep 1 00:24:05.819 04:07:40 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:06.753 04:07:41 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:06.753 04:07:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.011 04:07:41 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.269 04:07:42 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:07.269 04:07:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:07.269 04:07:42 -- host/multipath.sh@65 -- # dtrace_pid=88613 00:24:07.269 04:07:42 -- host/multipath.sh@66 -- # sleep 6 00:24:13.846 04:07:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:13.846 04:07:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:13.846 04:07:48 -- host/multipath.sh@67 -- # active_port=4421 00:24:13.846 04:07:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.846 Attaching 4 probes... 
00:24:13.846 @path[10.0.0.2, 4421]: 20799 00:24:13.846 @path[10.0.0.2, 4421]: 21379 00:24:13.846 @path[10.0.0.2, 4421]: 21245 00:24:13.846 @path[10.0.0.2, 4421]: 21411 00:24:13.846 @path[10.0.0.2, 4421]: 21336 00:24:13.846 04:07:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:13.846 04:07:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:13.846 04:07:48 -- host/multipath.sh@69 -- # sed -n 1p 00:24:13.846 04:07:48 -- host/multipath.sh@69 -- # port=4421 00:24:13.846 04:07:48 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.846 04:07:48 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.846 04:07:48 -- host/multipath.sh@72 -- # kill 88613 00:24:13.847 04:07:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.847 04:07:48 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:13.847 04:07:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:13.847 04:07:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:14.137 04:07:49 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:14.137 04:07:49 -- host/multipath.sh@65 -- # dtrace_pid=88745 00:24:14.137 04:07:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:14.137 04:07:49 -- host/multipath.sh@66 -- # sleep 6 00:24:20.701 04:07:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:20.701 04:07:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:20.701 04:07:55 -- host/multipath.sh@67 -- # active_port=4420 00:24:20.701 04:07:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.701 Attaching 4 probes... 
00:24:20.701 @path[10.0.0.2, 4420]: 22208 00:24:20.701 @path[10.0.0.2, 4420]: 22629 00:24:20.701 @path[10.0.0.2, 4420]: 22628 00:24:20.701 @path[10.0.0.2, 4420]: 22583 00:24:20.701 @path[10.0.0.2, 4420]: 22654 00:24:20.701 04:07:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:20.701 04:07:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:20.701 04:07:55 -- host/multipath.sh@69 -- # sed -n 1p 00:24:20.701 04:07:55 -- host/multipath.sh@69 -- # port=4420 00:24:20.701 04:07:55 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:20.701 04:07:55 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:20.701 04:07:55 -- host/multipath.sh@72 -- # kill 88745 00:24:20.701 04:07:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.701 04:07:55 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:20.701 04:07:55 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:20.701 04:07:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.960 04:07:55 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:20.960 04:07:55 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:20.960 04:07:55 -- host/multipath.sh@65 -- # dtrace_pid=88883 00:24:20.960 04:07:55 -- host/multipath.sh@66 -- # sleep 6 00:24:27.525 04:08:01 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:27.525 04:08:01 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:27.525 04:08:02 -- host/multipath.sh@67 -- # active_port=4421 00:24:27.525 04:08:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:27.525 Attaching 4 probes... 
00:24:27.525 @path[10.0.0.2, 4421]: 15399 00:24:27.525 @path[10.0.0.2, 4421]: 20980 00:24:27.525 @path[10.0.0.2, 4421]: 21049 00:24:27.525 @path[10.0.0.2, 4421]: 21070 00:24:27.525 @path[10.0.0.2, 4421]: 21188 00:24:27.525 04:08:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:27.525 04:08:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:27.525 04:08:02 -- host/multipath.sh@69 -- # sed -n 1p 00:24:27.525 04:08:02 -- host/multipath.sh@69 -- # port=4421 00:24:27.525 04:08:02 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:27.525 04:08:02 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:27.525 04:08:02 -- host/multipath.sh@72 -- # kill 88883 00:24:27.525 04:08:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:27.525 04:08:02 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:27.525 04:08:02 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:27.525 04:08:02 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:27.525 04:08:02 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:27.525 04:08:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:27.525 04:08:02 -- host/multipath.sh@65 -- # dtrace_pid=89008 00:24:27.525 04:08:02 -- host/multipath.sh@66 -- # sleep 6 00:24:34.089 04:08:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:34.089 04:08:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:34.089 04:08:08 -- host/multipath.sh@67 -- # active_port= 00:24:34.089 04:08:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.089 Attaching 4 probes... 
00:24:34.089 00:24:34.089 00:24:34.089 00:24:34.089 00:24:34.089 00:24:34.089 04:08:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:34.089 04:08:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:34.089 04:08:08 -- host/multipath.sh@69 -- # sed -n 1p 00:24:34.089 04:08:08 -- host/multipath.sh@69 -- # port= 00:24:34.089 04:08:08 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:34.089 04:08:08 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:34.089 04:08:08 -- host/multipath.sh@72 -- # kill 89008 00:24:34.089 04:08:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.089 04:08:08 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:34.089 04:08:08 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.089 04:08:09 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:34.348 04:08:09 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:34.348 04:08:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:34.348 04:08:09 -- host/multipath.sh@65 -- # dtrace_pid=89144 00:24:34.348 04:08:09 -- host/multipath.sh@66 -- # sleep 6 00:24:40.913 04:08:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:40.913 04:08:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:40.913 04:08:15 -- host/multipath.sh@67 -- # active_port=4421 00:24:40.913 04:08:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:40.913 Attaching 4 probes... 
00:24:40.913 @path[10.0.0.2, 4421]: 20369 00:24:40.913 @path[10.0.0.2, 4421]: 20757 00:24:40.913 @path[10.0.0.2, 4421]: 20682 00:24:40.913 @path[10.0.0.2, 4421]: 20741 00:24:40.913 @path[10.0.0.2, 4421]: 20917 00:24:40.913 04:08:15 -- host/multipath.sh@69 -- # sed -n 1p 00:24:40.913 04:08:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:40.913 04:08:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:40.913 04:08:15 -- host/multipath.sh@69 -- # port=4421 00:24:40.913 04:08:15 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:40.913 04:08:15 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:40.913 04:08:15 -- host/multipath.sh@72 -- # kill 89144 00:24:40.913 04:08:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:40.913 04:08:15 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.913 [2024-11-08 04:08:15.991786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991983] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11800 is same with the state(5) to be set 00:24:40.913 [2024-11-08 04:08:15.991999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
00:24:40.914 04:08:16 -- host/multipath.sh@101 -- # sleep 1
00:24:42.291 04:08:17 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:24:42.291 04:08:17 -- host/multipath.sh@65 -- # dtrace_pid=89274
00:24:42.291 04:08:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:42.291 04:08:17 -- host/multipath.sh@66 -- # sleep 6
00:24:48.881 04:08:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:48.881 04:08:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:24:48.881 04:08:23 -- host/multipath.sh@67 -- # active_port=4420
00:24:48.881 04:08:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:48.881 Attaching 4 probes...
00:24:48.881 @path[10.0.0.2, 4420]: 21643
00:24:48.881 @path[10.0.0.2, 4420]: 21994
00:24:48.881 @path[10.0.0.2, 4420]: 21606
00:24:48.881 @path[10.0.0.2, 4420]: 21968
00:24:48.881 @path[10.0.0.2, 4420]: 21892
00:24:48.881 04:08:23 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:48.881 04:08:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:48.881 04:08:23 -- host/multipath.sh@69 -- # sed -n 1p
00:24:48.881 04:08:23 -- host/multipath.sh@69 -- # port=4420
00:24:48.881 04:08:23 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:24:48.881 04:08:23 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:24:48.881 04:08:23 -- host/multipath.sh@72 -- # kill 89274
00:24:48.881 04:08:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:48.881 04:08:23 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:48.881 [2024-11-08 04:08:23.502670] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:48.881 04:08:23 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:48.881 04:08:23 -- host/multipath.sh@111 -- # sleep 6
00:24:55.446 04:08:29 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:24:55.446 04:08:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:55.446 04:08:29 -- host/multipath.sh@65 -- # dtrace_pid=89472
00:24:55.446 04:08:29 -- host/multipath.sh@66 -- # sleep 6
00:25:00.716 04:08:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:25:00.716 04:08:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:25:00.974 04:08:36 -- host/multipath.sh@67 -- # active_port=4421
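The failover exercised above is driven entirely by target-side RPCs; condensed from the trace, the sequence amounts to the short script below (the rpc.py path, NQN, address, and ports are verbatim from the log; this is a sketch of the traced calls, not the test's literal source):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Drop the 4421 listener so host I/O fails over to the non_optimized path on 4420 ...
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
# ... then re-create the listener and promote it so I/O migrates back to 4421.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized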
00:25:00.975 04:08:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:00.975 Attaching 4 probes...
00:25:00.975 @path[10.0.0.2, 4421]: 19888
00:25:00.975 @path[10.0.0.2, 4421]: 20177
00:25:00.975 @path[10.0.0.2, 4421]: 20155
00:25:00.975 @path[10.0.0.2, 4421]: 20443
00:25:00.975 @path[10.0.0.2, 4421]: 20469
00:25:00.975 04:08:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:25:00.975 04:08:36 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:25:00.975 04:08:36 -- host/multipath.sh@69 -- # sed -n 1p
00:25:00.975 04:08:36 -- host/multipath.sh@69 -- # port=4421
00:25:00.975 04:08:36 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:25:00.975 04:08:36 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:25:00.975 04:08:36 -- host/multipath.sh@72 -- # kill 89472
00:25:00.975 04:08:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:00.975 04:08:36 -- host/multipath.sh@114 -- # killprocess 88526
00:25:00.975 04:08:36 -- common/autotest_common.sh@936 -- # '[' -z 88526 ']'
00:25:00.975 04:08:36 -- common/autotest_common.sh@940 -- # kill -0 88526
00:25:01.233 04:08:36 -- common/autotest_common.sh@941 -- # uname
00:25:01.233 04:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:01.233 04:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88526
00:25:01.233 killing process with pid 88526
00:25:01.233 04:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:01.233 04:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:01.233 04:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88526'
00:25:01.233 04:08:36 -- common/autotest_common.sh@955 -- # kill 88526
00:25:01.233 04:08:36 -- common/autotest_common.sh@960 -- # wait 88526
00:25:01.233 Connection closed with partial response:
00:25:01.233
00:25:01.233
00:25:01.502 04:08:36 -- host/multipath.sh@116 -- # wait 88526
00:25:01.502 04:08:36 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:01.502 [2024-11-08 04:07:38.712500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:01.502 [2024-11-08 04:07:38.712583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88526 ]
00:25:01.502 [2024-11-08 04:07:38.840639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.502 [2024-11-08 04:07:38.929349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:01.502 Running I/O for 90 seconds...
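Each confirm_io_on_port cycle traced at host/multipath.sh@64-@73 follows one pattern: start a bpftrace probe that counts I/O per path, wait, ask the target which listener is in the expected ANA state, and compare both answers against the expected port. A rough reconstruction from the xtrace lines (the paths and the jq/awk/cut/sed pipeline are verbatim from the log; the function body itself is inferred, not the literal source):

confirm_io_on_port() {
    local expected_state=$1 expected_port=$2
    local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    # nvmf_path.bt prints one counter per path, e.g. "@path[10.0.0.2, 4421]: 20369";
    # 88422 is the bdevperf pid in this run. Redirect to trace.txt is inferred from @68.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88422 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    local dtrace_pid=$!
    sleep 6
    # Port of the listener the target reports in the expected ANA state.
    local active_port
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
    # Port the probe actually saw I/O on.
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$expected_port" ]] && [[ $active_port == "$expected_port" ]]
    local rc=$?
    kill $dtrace_pid
    rm -f "$trace"
    return $rc
}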
00:25:01.502 [2024-11-08 04:07:49.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.502 [2024-11-08 04:07:49.021470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.502 [2024-11-08 04:07:49.021530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.502 [2024-11-08 04:07:49.021555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.502 [2024-11-08 04:07:49.021580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.502 [2024-11-08 04:07:49.021599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.502 [2024-11-08 04:07:49.021622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.502 [2024-11-08 04:07:49.021639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.502 [2024-11-08 04:07:49.021660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.502 [2024-11-08 04:07:49.021677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.502 [2024-11-08 04:07:49.021700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.021716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.021756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.021796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.021841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.021893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.021985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:01.503 [2024-11-08 04:07:49.022736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.022814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.022835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.022852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.023704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.023752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.023819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.023861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.023898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.023939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.023977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.023999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.024017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.024039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.503 [2024-11-08 04:07:49.024055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.024077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.024093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.024115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.503 [2024-11-08 04:07:49.024131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.503 [2024-11-08 04:07:49.024152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.024644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.024682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:25:01.504 [2024-11-08 04:07:49.024779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.024884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.024924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.024962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.024985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.504 [2024-11-08 04:07:49.025408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.504 [2024-11-08 04:07:49.025750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.504 [2024-11-08 04:07:49.025771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.025786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.025819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.025837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.025867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.025883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.025904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.025923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.025946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.025962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.025983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.026076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.026232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.026360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:34 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.026399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.027169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.027207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.505 [2024-11-08 04:07:49.027362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.505 [2024-11-08 04:07:49.027401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:01.505 [2024-11-08 04:07:49.027439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.505 [2024-11-08 04:07:49.027483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[... six further READ/WRITE command/completion pairs at 04:07:49.027 omitted: lba 32760-32800, sqhd 0029-002e, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:25:01.505 [2024-11-08 04:07:49.027742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.505 [2024-11-08 04:07:49.027759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:01.505 [2024-11-08 04:07:55.585389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.505 [2024-11-08 04:07:55.586141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... over a hundred further READ/WRITE command/completion pairs at 04:07:55.586-595 omitted: lba 35056-36288, sqhd 0016 through 007f wrapping to 0007, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:25:01.508 [2024-11-08 04:07:55.595700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.508 [2024-11-08 04:07:55.595719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:01.508 [2024-11-08 04:08:02.574027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.508 [2024-11-08 04:08:02.574106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... roughly eighty further READ/WRITE command/completion pairs at 04:08:02.574-578 omitted: lba 48008-49096, sqhd 0066 through 007f wrapping to 0035, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:25:01.511 [2024-11-08 04:08:02.578020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.511 [2024-11-08 04:08:02.578036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:01.511 [2024-11-08 04:08:02.578062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.511 [2024-11-08 04:08:02.578079] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:01.511 [2024-11-08 04:08:02.578566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.578621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.578779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.578824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.578910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.578980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.578997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.579040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.579083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.511 [2024-11-08 04:08:02.579354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.511 [2024-11-08 04:08:02.579508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.511 [2024-11-08 04:08:02.579535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.579552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.579639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.579768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.512 
[2024-11-08 04:08:02.579944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.579960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.579986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.580004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.580031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.580048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.580080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.580127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.512 [2024-11-08 04:08:02.580145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:02.580170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:02.580187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:15.992937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:15.993005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:15.993034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:15.993050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:15.993074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:15.993088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.512 [2024-11-08 04:08:15.993105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.512 [2024-11-08 04:08:15.993119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
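The burst above and the SQ-deletion burst that follows are consistent with a controller failover being exercised: the target first reports the path's ANA state as inaccessible, so in-flight I/O completes with ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3, status code 0x2), and once the submission queue is torn down the remaining queued commands complete with ABORTED - SQ DELETION (0x0/0x8). To digest bursts like these at a glance, a standalone helper along the following lines can tally completion statuses; it is a minimal sketch, assuming the console output has been saved to a file with one log entry per line (the summarize_completions.py name and the console.log path are illustrative, not produced by this job).

    # summarize_completions.py - hypothetical helper, not part of the test suite.
    # Tallies the status strings printed by spdk_nvme_print_completion, assuming
    # the raw console log keeps one entry per line (this page wraps them).
    import re
    from collections import Counter

    # Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
    COMPLETION = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: "
        r"(?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
    )

    def summarize(path):
        counts = Counter()
        with open(path) as log:
            for line in log:
                m = COMPLETION.search(line)
                if m:
                    counts[(m["status"], m["sct"], m["sc"], m["qid"])] += 1
        for (status, sct, sc, qid), n in counts.most_common():
            print(f"{n:6d}x qid:{qid} {status} (sct:0x{sct} sc:0x{sc})")

    if __name__ == "__main__":
        summarize("console.log")  # illustrative path

Run against a capture of this console output, it should collapse the two bursts above into two counters, one per (status, qid) bucket.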
00:25:01.512 [2024-11-08 04:08:15.992937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.512 [2024-11-08 04:08:15.993005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further READ/WRITE commands on qid:1 (lba 97696-99064), each completed with ABORTED - SQ DELETION (00/08), timestamps 04:08:15.993034 through 04:08:15.996958, elided ...]
00:25:01.515 [2024-11-08 04:08:15.996974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1
lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.515 [2024-11-08 04:08:15.996987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.515 [2024-11-08 04:08:15.997018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb725b0 is same with the state(5) to be set 00:25:01.515 [2024-11-08 04:08:15.997052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.515 [2024-11-08 04:08:15.997063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.515 [2024-11-08 04:08:15.997074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98632 len:8 PRP1 0x0 PRP2 0x0 00:25:01.515 [2024-11-08 04:08:15.997088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997155] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb725b0 was disconnected and freed. reset controller. 00:25:01.515 [2024-11-08 04:08:15.997257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.515 [2024-11-08 04:08:15.997291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.515 [2024-11-08 04:08:15.997324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.515 [2024-11-08 04:08:15.997365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.515 [2024-11-08 04:08:15.997394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.515 [2024-11-08 04:08:15.997408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16790 is same with the state(5) to be set 00:25:01.515 [2024-11-08 04:08:15.998594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.515 [2024-11-08 04:08:15.998637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd16790 (9): Bad file descriptor 00:25:01.515 [2024-11-08 04:08:15.998756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.515 [2024-11-08 04:08:15.998817] posix.c:1032:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:25:01.515 [2024-11-08 04:08:15.998843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd16790 with addr=10.0.0.2, port=4421 00:25:01.515 [2024-11-08 04:08:15.998861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16790 is same with the state(5) to be set 00:25:01.515 [2024-11-08 04:08:15.998888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd16790 (9): Bad file descriptor 00:25:01.515 [2024-11-08 04:08:15.998913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.515 [2024-11-08 04:08:15.998930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.515 [2024-11-08 04:08:15.998946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.515 [2024-11-08 04:08:15.998972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.515 [2024-11-08 04:08:15.998990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.515 [2024-11-08 04:08:26.052540] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.515 Received shutdown signal, test time was about 55.379306 seconds 00:25:01.515 00:25:01.515 Latency(us) 00:25:01.515 [2024-11-08T04:08:36.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.515 [2024-11-08T04:08:36.626Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.515 Verification LBA range: start 0x0 length 0x4000 00:25:01.515 Nvme0n1 : 55.38 12112.14 47.31 0.00 0.00 10551.84 968.15 7015926.69 00:25:01.515 [2024-11-08T04:08:36.626Z] =================================================================================================================== 00:25:01.515 [2024-11-08T04:08:36.626Z] Total : 12112.14 47.31 0.00 0.00 10551.84 968.15 7015926.69 00:25:01.515 04:08:36 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.515 04:08:36 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:25:01.515 04:08:36 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:01.515 04:08:36 -- host/multipath.sh@125 -- # nvmftestfini 00:25:01.515 04:08:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:01.515 04:08:36 -- nvmf/common.sh@116 -- # sync 00:25:01.774 04:08:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:01.775 04:08:36 -- nvmf/common.sh@119 -- # set +e 00:25:01.775 04:08:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:01.775 04:08:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:01.775 rmmod nvme_tcp 00:25:01.775 rmmod nvme_fabrics 00:25:01.775 rmmod nvme_keyring 00:25:01.775 04:08:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:01.775 04:08:36 -- nvmf/common.sh@123 -- # set -e 00:25:01.775 04:08:36 -- nvmf/common.sh@124 -- # return 0 00:25:01.775 04:08:36 -- nvmf/common.sh@477 -- # '[' -n 88422 ']' 00:25:01.775 04:08:36 -- nvmf/common.sh@478 -- # killprocess 88422 00:25:01.775 04:08:36 -- common/autotest_common.sh@936 -- # '[' -z 88422 ']' 00:25:01.775 04:08:36 -- common/autotest_common.sh@940 -- # kill -0 88422 00:25:01.775 04:08:36 -- common/autotest_common.sh@941 -- # uname 00:25:01.775 04:08:36 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:01.775 04:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88422 00:25:01.775 killing process with pid 88422 00:25:01.775 04:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:01.775 04:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:01.775 04:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88422' 00:25:01.775 04:08:36 -- common/autotest_common.sh@955 -- # kill 88422 00:25:01.775 04:08:36 -- common/autotest_common.sh@960 -- # wait 88422 00:25:02.033 04:08:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:02.033 04:08:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:02.033 04:08:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:02.033 04:08:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.033 04:08:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:02.033 04:08:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.033 04:08:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.033 04:08:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.033 04:08:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:02.033 ************************************ 00:25:02.033 END TEST nvmf_multipath 00:25:02.033 ************************************ 00:25:02.033 00:25:02.033 real 1m1.538s 00:25:02.033 user 2m52.504s 00:25:02.033 sys 0m14.494s 00:25:02.033 04:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:02.033 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:25:02.033 04:08:37 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:02.033 04:08:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:02.033 04:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:02.033 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:25:02.033 ************************************ 00:25:02.033 START TEST nvmf_timeout 00:25:02.033 ************************************ 00:25:02.033 04:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:02.293 * Looking for test storage... 
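The nvmf_multipath teardown above is symmetric with its setup; condensed from the traced commands into a standalone sketch (the rpc shell variable is shorthand introduced here, and pid 88422 is specific to this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # shorthand, not from the script
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the target subsystem
modprobe -v -r nvme-tcp                                # pulls out nvme_tcp + nvme_fabrics, as logged
modprobe -v -r nvme-fabrics
kill 88422 && wait 88422                               # nvmf_tgt pid from this run
ip -4 addr flush nvmf_init_if                          # unwire the initiator-side veth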
00:25:02.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:02.293 04:08:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:02.293 04:08:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:02.293 04:08:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:02.293 04:08:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:02.293 04:08:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:02.293 04:08:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:02.293 04:08:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:02.293 04:08:37 -- scripts/common.sh@335 -- # IFS=.-: 00:25:02.293 04:08:37 -- scripts/common.sh@335 -- # read -ra ver1 00:25:02.293 04:08:37 -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.293 04:08:37 -- scripts/common.sh@336 -- # read -ra ver2 00:25:02.293 04:08:37 -- scripts/common.sh@337 -- # local 'op=<' 00:25:02.293 04:08:37 -- scripts/common.sh@339 -- # ver1_l=2 00:25:02.293 04:08:37 -- scripts/common.sh@340 -- # ver2_l=1 00:25:02.293 04:08:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:02.293 04:08:37 -- scripts/common.sh@343 -- # case "$op" in 00:25:02.293 04:08:37 -- scripts/common.sh@344 -- # : 1 00:25:02.293 04:08:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:02.293 04:08:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.293 04:08:37 -- scripts/common.sh@364 -- # decimal 1 00:25:02.293 04:08:37 -- scripts/common.sh@352 -- # local d=1 00:25:02.293 04:08:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.293 04:08:37 -- scripts/common.sh@354 -- # echo 1 00:25:02.293 04:08:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:02.293 04:08:37 -- scripts/common.sh@365 -- # decimal 2 00:25:02.293 04:08:37 -- scripts/common.sh@352 -- # local d=2 00:25:02.293 04:08:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.293 04:08:37 -- scripts/common.sh@354 -- # echo 2 00:25:02.293 04:08:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:02.293 04:08:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:02.293 04:08:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:02.293 04:08:37 -- scripts/common.sh@367 -- # return 0 00:25:02.293 04:08:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.293 04:08:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.293 --rc genhtml_branch_coverage=1 00:25:02.293 --rc genhtml_function_coverage=1 00:25:02.293 --rc genhtml_legend=1 00:25:02.293 --rc geninfo_all_blocks=1 00:25:02.293 --rc geninfo_unexecuted_blocks=1 00:25:02.293 00:25:02.293 ' 00:25:02.293 04:08:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.293 --rc genhtml_branch_coverage=1 00:25:02.293 --rc genhtml_function_coverage=1 00:25:02.293 --rc genhtml_legend=1 00:25:02.293 --rc geninfo_all_blocks=1 00:25:02.293 --rc geninfo_unexecuted_blocks=1 00:25:02.293 00:25:02.293 ' 00:25:02.293 04:08:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.293 --rc genhtml_branch_coverage=1 00:25:02.293 --rc genhtml_function_coverage=1 00:25:02.293 --rc genhtml_legend=1 00:25:02.293 --rc geninfo_all_blocks=1 00:25:02.293 --rc geninfo_unexecuted_blocks=1 00:25:02.293 00:25:02.293 ' 00:25:02.293 
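The lt/cmp_versions calls traced just above implement a dotted-version comparison. Reassembled from this xtrace only, the logic is roughly the following sketch (not the verbatim scripts/common.sh body; the lt/gt/eq counters and the >=/<= branches are folded away):

# Sketch of the version compare traced above: split both versions on ".", "-"
# and ":" and compare field by field. Reconstruction, not the upstream source.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}; d2=${ver2[v]:-0}            # missing fields compare as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]                                  # every field matched
}
lt() { cmp_versions "$1" '<' "$2"; }                  # lt 1.15 2 succeeds, as traced above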
04:08:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.293 --rc genhtml_branch_coverage=1 00:25:02.293 --rc genhtml_function_coverage=1 00:25:02.293 --rc genhtml_legend=1 00:25:02.293 --rc geninfo_all_blocks=1 00:25:02.293 --rc geninfo_unexecuted_blocks=1 00:25:02.293 00:25:02.293 ' 00:25:02.293 04:08:37 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:02.293 04:08:37 -- nvmf/common.sh@7 -- # uname -s 00:25:02.293 04:08:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.293 04:08:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.293 04:08:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.293 04:08:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.293 04:08:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.293 04:08:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.293 04:08:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.293 04:08:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.293 04:08:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.293 04:08:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.293 04:08:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:25:02.293 04:08:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:25:02.293 04:08:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.294 04:08:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.294 04:08:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:02.294 04:08:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.294 04:08:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.294 04:08:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.294 04:08:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.294 04:08:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.294 04:08:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.294 04:08:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.294 04:08:37 -- paths/export.sh@5 -- # export PATH 00:25:02.294 04:08:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.294 04:08:37 -- nvmf/common.sh@46 -- # : 0 00:25:02.294 04:08:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:02.294 04:08:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:02.294 04:08:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:02.294 04:08:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.294 04:08:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.294 04:08:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:02.294 04:08:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:02.294 04:08:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:02.294 04:08:37 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.294 04:08:37 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.294 04:08:37 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.294 04:08:37 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:02.294 04:08:37 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.294 04:08:37 -- host/timeout.sh@19 -- # nvmftestinit 00:25:02.294 04:08:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.294 04:08:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.294 04:08:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:02.294 04:08:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.294 04:08:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.294 04:08:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.294 04:08:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.294 04:08:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.294 04:08:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:02.294 04:08:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:02.294 04:08:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:02.294 04:08:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:02.294 04:08:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:02.294 04:08:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:02.294 04:08:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.294 04:08:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.294 04:08:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:25:02.294 04:08:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:02.294 04:08:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:02.294 04:08:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:02.294 04:08:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:02.294 04:08:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.294 04:08:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:02.294 04:08:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:02.294 04:08:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:02.294 04:08:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:02.294 04:08:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:02.294 04:08:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:02.294 Cannot find device "nvmf_tgt_br" 00:25:02.294 04:08:37 -- nvmf/common.sh@154 -- # true 00:25:02.294 04:08:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:02.294 Cannot find device "nvmf_tgt_br2" 00:25:02.294 04:08:37 -- nvmf/common.sh@155 -- # true 00:25:02.294 04:08:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:02.294 04:08:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:02.294 Cannot find device "nvmf_tgt_br" 00:25:02.294 04:08:37 -- nvmf/common.sh@157 -- # true 00:25:02.294 04:08:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:02.294 Cannot find device "nvmf_tgt_br2" 00:25:02.294 04:08:37 -- nvmf/common.sh@158 -- # true 00:25:02.294 04:08:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:02.553 04:08:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:02.553 04:08:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:02.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.553 04:08:37 -- nvmf/common.sh@161 -- # true 00:25:02.553 04:08:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:02.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.553 04:08:37 -- nvmf/common.sh@162 -- # true 00:25:02.553 04:08:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:02.553 04:08:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:02.553 04:08:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:02.553 04:08:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:02.553 04:08:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:02.553 04:08:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:02.553 04:08:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:02.553 04:08:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:02.553 04:08:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:02.553 04:08:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:02.553 04:08:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:02.553 04:08:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:02.553 04:08:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:25:02.553 04:08:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:02.553 04:08:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:02.553 04:08:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:02.553 04:08:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:02.553 04:08:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:02.553 04:08:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:02.553 04:08:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:02.553 04:08:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:02.553 04:08:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:02.553 04:08:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:02.553 04:08:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:02.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:25:02.553 00:25:02.553 --- 10.0.0.2 ping statistics --- 00:25:02.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.553 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:02.553 04:08:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:02.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:02.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:25:02.553 00:25:02.553 --- 10.0.0.3 ping statistics --- 00:25:02.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.553 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:02.553 04:08:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:02.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:02.553 00:25:02.553 --- 10.0.0.1 ping statistics --- 00:25:02.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.553 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:02.811 04:08:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.811 04:08:37 -- nvmf/common.sh@421 -- # return 0 00:25:02.811 04:08:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.811 04:08:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.811 04:08:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.811 04:08:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.811 04:08:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.812 04:08:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.812 04:08:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.812 04:08:37 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:02.812 04:08:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.812 04:08:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.812 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:25:02.812 04:08:37 -- nvmf/common.sh@469 -- # nvmfpid=89800 00:25:02.812 04:08:37 -- nvmf/common.sh@470 -- # waitforlisten 89800 00:25:02.812 04:08:37 -- common/autotest_common.sh@829 -- # '[' -z 89800 ']' 00:25:02.812 04:08:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.812 04:08:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.812 04:08:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.812 04:08:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.812 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:25:02.812 04:08:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:02.812 [2024-11-08 04:08:37.748307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:02.812 [2024-11-08 04:08:37.748429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.812 [2024-11-08 04:08:37.884613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:03.069 [2024-11-08 04:08:37.968381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:03.069 [2024-11-08 04:08:37.968522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.069 [2024-11-08 04:08:37.968535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.069 [2024-11-08 04:08:37.968544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
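With connectivity verified by the three pings above, the whole nvmf_veth_init bring-up reduces to the ip/iptables calls seen in the trace. A condensed sketch (the second target interface, nvmf_tgt_if2/10.0.0.3, and the "Cannot find device" cleanup branches are trimmed):

# Target lives in its own network namespace; both veth halves hang off one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                      # same sanity check as above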
00:25:03.069 [2024-11-08 04:08:37.968735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.069 [2024-11-08 04:08:37.968748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.635 04:08:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.635 04:08:38 -- common/autotest_common.sh@862 -- # return 0 00:25:03.635 04:08:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:03.635 04:08:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.635 04:08:38 -- common/autotest_common.sh@10 -- # set +x 00:25:03.635 04:08:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.635 04:08:38 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.635 04:08:38 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.893 [2024-11-08 04:08:38.960833] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.893 04:08:38 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:04.156 Malloc0 00:25:04.157 04:08:39 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.417 04:08:39 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.676 04:08:39 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.935 [2024-11-08 04:08:39.908674] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.935 04:08:39 -- host/timeout.sh@32 -- # bdevperf_pid=89891 00:25:04.935 04:08:39 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:04.935 04:08:39 -- host/timeout.sh@34 -- # waitforlisten 89891 /var/tmp/bdevperf.sock 00:25:04.935 04:08:39 -- common/autotest_common.sh@829 -- # '[' -z 89891 ']' 00:25:04.935 04:08:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.935 04:08:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.935 04:08:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.935 04:08:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.935 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:25:04.935 [2024-11-08 04:08:39.985146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
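Target bring-up and provisioning condense to one launch plus five RPCs, all taken verbatim from the trace above (the rpc shell variable is shorthand introduced here; the waitforlisten poll on the RPC socket is elided):

# Launch the target inside the namespace, then provision it over /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
# (waitforlisten on /var/tmp/spdk.sock elided)
$rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, flags as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # becomes nsid 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420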
00:25:04.935 [2024-11-08 04:08:39.985256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89891 ] 00:25:05.194 [2024-11-08 04:08:40.128659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.194 [2024-11-08 04:08:40.227248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.762 04:08:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.762 04:08:40 -- common/autotest_common.sh@862 -- # return 0 00:25:05.762 04:08:40 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:06.329 04:08:41 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:06.329 NVMe0n1 00:25:06.329 04:08:41 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.329 04:08:41 -- host/timeout.sh@51 -- # rpc_pid=89933 00:25:06.329 04:08:41 -- host/timeout.sh@53 -- # sleep 1 00:25:06.588 Running I/O for 10 seconds... 00:25:07.528 04:08:42 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.528 [2024-11-08 04:08:42.603815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 
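On the initiator side, the timeout behaviour under test hangs entirely off the attach options traced above. Condensed to the same binaries, sockets, and arguments as this run (a sketch; the waitforlisten poll on the bdevperf socket is elided, and the comment on the two timeout flags reflects their documented semantics rather than anything in the script):

# bdevperf in "wait for RPC" mode (-z); perform_tests kicks off the 10 s verify run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
# (waitforlisten on /var/tmp/bdevperf.sock elided)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # retry every 2 s, give up after 5 s
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests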
[2024-11-08 04:08:42.603982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.603992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.528 [2024-11-08 04:08:42.604229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604341] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c5a40 is same with the state(5) to be set 00:25:07.529 [2024-11-08 04:08:42.604927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.604982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 
04:08:42.605102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.529 [2024-11-08 04:08:42.605461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.529 [2024-11-08 04:08:42.605481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.529 [2024-11-08 04:08:42.605523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.529 [2024-11-08 04:08:42.605545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.529 [2024-11-08 04:08:42.605598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.529 [2024-11-08 04:08:42.605608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:07.530 [2024-11-08 04:08:42.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.605908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.605982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.605992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.530 [2024-11-08 04:08:42.606335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7680 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.530 [2024-11-08 04:08:42.606518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.530 [2024-11-08 04:08:42.606534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 
04:08:42.606745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.606981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.606992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.531 [2024-11-08 04:08:42.607533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:07.531 [2024-11-08 04:08:42.607544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.531 [2024-11-08 04:08:42.607564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.531 [2024-11-08 04:08:42.607572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.532 [2024-11-08 04:08:42.607636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.532 [2024-11-08 04:08:42.607714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.532 [2024-11-08 04:08:42.607733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607744] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.532 [2024-11-08 04:08:42.607753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.532 [2024-11-08 04:08:42.607808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.532 [2024-11-08 04:08:42.607947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.532 [2024-11-08 04:08:42.607956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xd74050 is same with the state(5) to be set
00:25:07.532 [2024-11-08 04:08:42.607973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:07.532 [2024-11-08 04:08:42.607980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:07.532 [2024-11-08 04:08:42.607988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7368 len:8 PRP1 0x0 PRP2 0x0
00:25:07.532 [2024-11-08 04:08:42.607997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.532 [2024-11-08 04:08:42.608050] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd74050 was disconnected and freed. reset controller.
00:25:07.532 [2024-11-08 04:08:42.608263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.532 [2024-11-08 04:08:42.608347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfedc0 (9): Bad file descriptor
00:25:07.532 [2024-11-08 04:08:42.608515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.532 [2024-11-08 04:08:42.608575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.532 [2024-11-08 04:08:42.608592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfedc0 with addr=10.0.0.2, port=4420
00:25:07.532 [2024-11-08 04:08:42.608603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfedc0 is same with the state(5) to be set
00:25:07.532 [2024-11-08 04:08:42.608621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfedc0 (9): Bad file descriptor
00:25:07.532 [2024-11-08 04:08:42.608638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:07.532 [2024-11-08 04:08:42.608648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:07.532 [2024-11-08 04:08:42.608658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:07.532 [2024-11-08 04:08:42.608678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:07.532 [2024-11-08 04:08:42.608689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
04:08:42 -- host/timeout.sh@56 -- # sleep 2
00:25:10.110 [2024-11-08 04:08:44.608777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.110 [2024-11-08 04:08:44.608866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.110 [2024-11-08 04:08:44.608883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfedc0 with addr=10.0.0.2, port=4420
00:25:10.110 [2024-11-08 04:08:44.608896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfedc0 is same with the state(5) to be set
00:25:10.110 [2024-11-08 04:08:44.608917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfedc0 (9): Bad file descriptor
00:25:10.110 [2024-11-08 04:08:44.608935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:10.110 [2024-11-08 04:08:44.608944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:10.110 [2024-11-08 04:08:44.608969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:10.110 [2024-11-08 04:08:44.609022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.110 [2024-11-08 04:08:44.609032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:10.110 04:08:44 -- host/timeout.sh@57 -- # get_controller
00:25:10.110 04:08:44 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:10.110 04:08:44 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:10.110 04:08:44 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:25:10.110 04:08:44 -- host/timeout.sh@58 -- # get_bdev
00:25:10.110 04:08:44 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:10.110 04:08:44 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:10.110 04:08:45 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:25:10.110 04:08:45 -- host/timeout.sh@61 -- # sleep 5
00:25:12.013 [2024-11-08 04:08:46.609146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.013 [2024-11-08 04:08:46.609232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.013 [2024-11-08 04:08:46.609249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfedc0 with addr=10.0.0.2, port=4420
00:25:12.013 [2024-11-08 04:08:46.609261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfedc0 is same with the state(5) to be set
00:25:12.013 [2024-11-08 04:08:46.609285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfedc0 (9): Bad file descriptor
00:25:12.013 [2024-11-08 04:08:46.609303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:12.013 [2024-11-08 04:08:46.609312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:12.013 [2024-11-08 04:08:46.609321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:12.013 [2024-11-08 04:08:46.609346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:12.013 [2024-11-08 04:08:46.609357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.917 [2024-11-08 04:08:48.609392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.917 [2024-11-08 04:08:48.609435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.917 [2024-11-08 04:08:48.609462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.917 [2024-11-08 04:08:48.609470] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:13.917 [2024-11-08 04:08:48.609489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.853
00:25:14.853 Latency(us)
00:25:14.853 [2024-11-08T04:08:49.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:14.853 [2024-11-08T04:08:49.964Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:14.853 Verification LBA range: start 0x0 length 0x4000
00:25:14.853 NVMe0n1 : 8.12 2125.38 8.30 15.77 0.00 59709.26 2427.81 7015926.69
00:25:14.853 [2024-11-08T04:08:49.964Z] ===================================================================================================================
00:25:14.853 [2024-11-08T04:08:49.964Z] Total : 2125.38 8.30 15.77 0.00 59709.26 2427.81 7015926.69
00:25:14.853 0
00:25:15.112 04:08:50 -- host/timeout.sh@62 -- # get_controller
00:25:15.112 04:08:50 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:15.112 04:08:50 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:15.370 04:08:50 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:25:15.370 04:08:50 -- host/timeout.sh@63 -- # get_bdev
00:25:15.371 04:08:50 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:15.371 04:08:50 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:15.629 04:08:50 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:25:15.629 04:08:50 -- host/timeout.sh@65 -- # wait 89933
00:25:15.629 04:08:50 -- host/timeout.sh@67 -- # killprocess 89891
00:25:15.629 04:08:50 -- common/autotest_common.sh@936 -- # '[' -z 89891 ']'
00:25:15.629 04:08:50 -- common/autotest_common.sh@940 -- # kill -0 89891
00:25:15.629 04:08:50 -- common/autotest_common.sh@941 -- # uname
00:25:15.629 04:08:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:15.629 04:08:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89891
00:25:15.629 04:08:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:15.629 04:08:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:15.629 04:08:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89891'
killing process with pid 89891
04:08:50 -- common/autotest_common.sh@955 -- # kill 89891
00:25:15.629 Received shutdown signal, test time was about 9.246369 seconds
00:25:15.629
00:25:15.629 Latency(us)
00:25:15.629 [2024-11-08T04:08:50.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.629 [2024-11-08T04:08:50.740Z] ===================================================================================================================
00:25:15.629 [2024-11-08T04:08:50.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:15.629 04:08:50 -- common/autotest_common.sh@960 -- # wait 89891
00:25:15.888 04:08:50 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:16.147 [2024-11-08 04:08:51.132245] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:16.147 04:08:51 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:25:16.147 04:08:51 -- host/timeout.sh@74 -- # bdevperf_pid=90092
00:25:16.147 04:08:51 -- host/timeout.sh@76 -- # waitforlisten 90092 /var/tmp/bdevperf.sock
00:25:16.147 04:08:51 -- common/autotest_common.sh@829 -- # '[' -z 90092 ']'
00:25:16.147 04:08:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:16.147 04:08:51 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:16.147 04:08:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:16.147 04:08:51 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:16.147 04:08:51 -- common/autotest_common.sh@10 -- # set +x
00:25:16.147 [2024-11-08 04:08:51.188710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-11-08 04:08:51.188805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90092 ]
00:25:16.406 [2024-11-08 04:08:51.316647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.406 [2024-11-08 04:08:51.402402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:17.342 04:08:52 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:17.342 04:08:52 -- common/autotest_common.sh@862 -- # return 0
00:25:17.342 04:08:52 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:17.342 04:08:52 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:25:17.601 NVMe0n1
00:25:17.601 04:08:52 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:17.601 04:08:52 -- host/timeout.sh@84 -- # rpc_pid=90138
00:25:17.601 04:08:52 -- host/timeout.sh@86 -- # sleep 1
00:25:17.859 Running I/O for 10 seconds...
00:25:18.795 04:08:53 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.056 [2024-11-08 04:08:53.911102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.056 [2024-11-08 04:08:53.911276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set 00:25:19.057 [2024-11-08 04:08:53.911503] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2b70 is same with the state(5) to be set
[... same tcp.c:1576 recv-state message repeated 5 more times for tqpair=0x23b2b70 (04:08:53.911509-04:08:53.911535) ...]
00:25:19.057 [2024-11-08 04:08:53.912010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.057 [2024-11-08 04:08:53.912038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/WRITE print_command + "ABORTED - SQ DELETION" print_completion pair repeats for every other command still queued on qid:1 (lba 16008-17320, 04:08:53.912058-04:08:53.914574) as the qpair is torn down ...]
00:25:19.060 [2024-11-08 04:08:53.914583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825050 is same with the state(5) to be set
00:25:19.060 [2024-11-08 04:08:53.914594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:19.060 [2024-11-08 04:08:53.914601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:19.060 [2024-11-08 04:08:53.914608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16664 len:8 PRP1 0x0 PRP2 0x0
00:25:19.060 [2024-11-08 04:08:53.914631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.060 [2024-11-08 04:08:53.914699] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1825050 was disconnected and freed. reset controller.
00:25:19.060 [2024-11-08 04:08:53.914978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.060 [2024-11-08 04:08:53.915067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor
00:25:19.060 [2024-11-08 04:08:53.915170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.060 [2024-11-08 04:08:53.915214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.060 [2024-11-08 04:08:53.915229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420 [2024-11-08 04:08:53.915239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set
00:25:19.060 [2024-11-08 04:08:53.915257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor
00:25:19.060 [2024-11-08 04:08:53.915271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:19.060 [2024-11-08 04:08:53.915280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:19.060 [2024-11-08 04:08:53.915291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:19.060 [2024-11-08 04:08:53.915310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:19.060 [2024-11-08 04:08:53.915320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.060 04:08:53 -- host/timeout.sh@90 -- # sleep 1
00:25:19.997 [2024-11-08 04:08:54.915388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.997 [2024-11-08 04:08:54.915476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.997 [2024-11-08 04:08:54.915493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420
00:25:19.997 [2024-11-08 04:08:54.915502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set
00:25:19.997 [2024-11-08 04:08:54.915520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor
00:25:19.997 [2024-11-08 04:08:54.915535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:19.997 [2024-11-08 04:08:54.915543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:19.997 [2024-11-08 04:08:54.915552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:19.997 [2024-11-08 04:08:54.915570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
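A note for readers of this trace: errno = 111 is ECONNREFUSED. The target's TCP listener has been taken down by the test script, so bdev_nvme's once-per-second reset attempts each fail at connect() and the controller stays in the failed state until the listener comes back. The bounce is driven by the two rpc.py calls visible in the surrounding trace; a minimal standalone sketch of that sequence (script path, NQN, address, and port copied from this log, not a verbatim excerpt of timeout.sh) would be:

    # take the target's TCP listener down; host reconnects now fail with errno 111
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # let one or two reset attempts fail, as above
    # bring the listener back; the next reconnect succeeds and the reset completes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420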
00:25:19.997 [2024-11-08 04:08:54.915580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.997 04:08:54 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:20.255 [2024-11-08 04:08:55.181154] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:20.255 04:08:55 -- host/timeout.sh@92 -- # wait 90138
00:25:21.191 [2024-11-08 04:08:55.935156] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:27.755
00:25:27.755 Latency(us)
00:25:27.755 [2024-11-08T04:09:02.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.755 [2024-11-08T04:09:02.866Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:27.755 Verification LBA range: start 0x0 length 0x4000
00:25:27.755 NVMe0n1 : 10.01 11423.16 44.62 0.00 0.00 11190.69 997.93 3019898.88
00:25:27.755 [2024-11-08T04:09:02.866Z] ===================================================================================================================
00:25:27.755 [2024-11-08T04:09:02.866Z] Total : 11423.16 44.62 0.00 0.00 11190.69 997.93 3019898.88
00:25:27.755 0
00:25:27.755 04:09:02 -- host/timeout.sh@97 -- # rpc_pid=90256
00:25:27.755 04:09:02 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:27.755 04:09:02 -- host/timeout.sh@98 -- # sleep 1
00:25:28.020 Running I/O for 10 seconds...
00:25:28.955 04:09:03 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
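An aside on the rpc.py invocations above: scripts/rpc.py is a thin JSON-RPC client that talks to the running target over a Unix-domain socket (/var/tmp/spdk.sock by default). Roughly the request it sends for the remove_listener call is sketched below; the params layout follows the SPDK JSON-RPC documentation and may differ slightly in the SPDK revision pinned by this build, and the nc invocation assumes a netcat build with Unix-socket support (-U):

    # a sketch of the JSON-RPC behind "rpc.py nvmf_subsystem_remove_listener ..."
    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_remove_listener",
      "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
        "listen_address":{"trtype":"tcp","adrfam":"ipv4",
          "traddr":"10.0.0.2","trsvcid":"4420"}}}' | nc -U /var/tmp/spdk.sock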
00:25:29.217 [2024-11-08 04:09:04.083167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220fc70 is same with the state(5) to be set
[... same tcp.c:1576 recv-state message repeated about 40 more times for tqpair=0x220fc70 (04:09:04.083236-04:09:04.083598) ...]
00:25:29.217 [2024-11-08 04:09:04.083933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:29.217 [2024-11-08 04:09:04.083990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ print_command + "ABORTED - SQ DELETION" print_completion pair repeats for the other commands queued on qid:1 (lba 5408-6136, 04:09:04.084012-04:09:04.084535) ...]
00:25:29.218 [2024-11-08 04:09:04.084546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:47 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.218 [2024-11-08 04:09:04.084933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.084986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.084997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.218 [2024-11-08 04:09:04.085153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.218 [2024-11-08 04:09:04.085161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 
[2024-11-08 04:09:04.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.085975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.085985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.219 [2024-11-08 04:09:04.085993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.086003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.086011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.219 [2024-11-08 04:09:04.086021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.219 [2024-11-08 04:09:04.086029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.220 [2024-11-08 04:09:04.086534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.220 [2024-11-08 04:09:04.086677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1820c00 is same with the state(5) to be set 00:25:29.220 [2024-11-08 04:09:04.086704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:29.220 [2024-11-08 04:09:04.086712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:29.220 [2024-11-08 04:09:04.086719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6200 len:8 PRP1 0x0 PRP2 0x0 00:25:29.220 [2024-11-08 04:09:04.086728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086780] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1820c00 was disconnected and freed. reset controller. 00:25:29.220 [2024-11-08 04:09:04.086870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.220 [2024-11-08 04:09:04.086885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.220 [2024-11-08 04:09:04.086905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.220 [2024-11-08 04:09:04.086922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.220 [2024-11-08 04:09:04.086940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.220 [2024-11-08 04:09:04.086948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set 00:25:29.220 [2024-11-08 04:09:04.087165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.220 [2024-11-08 04:09:04.087193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor 00:25:29.220 [2024-11-08 04:09:04.087291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.221 [2024-11-08 04:09:04.087338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.221 [2024-11-08 04:09:04.087354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420 00:25:29.221 [2024-11-08 04:09:04.087368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set 00:25:29.221 [2024-11-08 04:09:04.087386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): 
Bad file descriptor 00:25:29.221 [2024-11-08 04:09:04.087402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.221 [2024-11-08 04:09:04.087411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.221 [2024-11-08 04:09:04.087452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.221 [2024-11-08 04:09:04.087473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.221 [2024-11-08 04:09:04.096545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.221 04:09:04 -- host/timeout.sh@101 -- # sleep 3 00:25:30.205 [2024-11-08 04:09:05.096689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-11-08 04:09:05.096808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.205 [2024-11-08 04:09:05.096825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420 00:25:30.205 [2024-11-08 04:09:05.096837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set 00:25:30.205 [2024-11-08 04:09:05.096859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor 00:25:30.205 [2024-11-08 04:09:05.096876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.205 [2024-11-08 04:09:05.096886] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.205 [2024-11-08 04:09:05.096895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.205 [2024-11-08 04:09:05.096918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.205 [2024-11-08 04:09:05.096929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.141 [2024-11-08 04:09:06.096990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.141 [2024-11-08 04:09:06.097065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.141 [2024-11-08 04:09:06.097080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420 00:25:31.141 [2024-11-08 04:09:06.097090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set 00:25:31.141 [2024-11-08 04:09:06.097106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor 00:25:31.141 [2024-11-08 04:09:06.097121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.141 [2024-11-08 04:09:06.097128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.141 [2024-11-08 04:09:06.097136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.141 [2024-11-08 04:09:06.097152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
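Each failed cycle above has the same shape: connect() to 10.0.0.2:4420 is refused, the pending qpair flush then reports a bad file descriptor, controller reinitialization fails, and bdev_nvme schedules the next reset. errno 111 is Linux's ECONNREFUSED, i.e. nothing is listening on the target port while the listener is down; a quick way to confirm that mapping on a build host with kernel headers installed (an aside, not part of the test output):

    # ECONNREFUSED is defined as 111 in the Linux UAPI headers
    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h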
00:25:31.141 [2024-11-08 04:09:06.097161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.078 [2024-11-08 04:09:07.097451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.078 [2024-11-08 04:09:07.097567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.078 [2024-11-08 04:09:07.097584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afdc0 with addr=10.0.0.2, port=4420 00:25:32.078 [2024-11-08 04:09:07.097595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afdc0 is same with the state(5) to be set 00:25:32.078 [2024-11-08 04:09:07.097738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17afdc0 (9): Bad file descriptor 00:25:32.078 [2024-11-08 04:09:07.097992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:32.078 [2024-11-08 04:09:07.098014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:32.078 [2024-11-08 04:09:07.098023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.078 [2024-11-08 04:09:07.100219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.078 [2024-11-08 04:09:07.100258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.078 04:09:07 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.337 [2024-11-08 04:09:07.346827] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.337 04:09:07 -- host/timeout.sh@103 -- # wait 90256 00:25:33.272 [2024-11-08 04:09:08.121043] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
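This is the recovery half of the timeout scenario: once host/timeout.sh re-adds the TCP listener, the target logs "Listening on 10.0.0.2 port 4420" and the very next reset attempt reconnects successfully. The listener toggle that drives the test reduces to two RPCs; a minimal sketch assembled only from the rpc.py invocations visible elsewhere in this log (the NQN, address, and port are this run's values):

    # take the listener away so host-side reconnects fail with errno 111
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # leave time for several reconnect cycles to fail
    # restore it; the next controller reset should then succeed
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420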
00:25:38.542
00:25:38.542 Latency(us)
00:25:38.542 [2024-11-08T04:09:13.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.542 [2024-11-08T04:09:13.653Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:38.542 Verification LBA range: start 0x0 length 0x4000
00:25:38.542 NVMe0n1 : 10.00 9350.08 36.52 7111.49 0.00 7763.98 532.48 3019898.88
00:25:38.542 [2024-11-08T04:09:13.653Z] ===================================================================================================================
00:25:38.542 [2024-11-08T04:09:13.653Z] Total : 9350.08 36.52 7111.49 0.00 7763.98 0.00 3019898.88
00:25:38.542 0
00:25:38.542 04:09:12 -- host/timeout.sh@105 -- # killprocess 90092
00:25:38.542 04:09:12 -- common/autotest_common.sh@936 -- # '[' -z 90092 ']'
00:25:38.542 04:09:12 -- common/autotest_common.sh@940 -- # kill -0 90092
00:25:38.542 04:09:12 -- common/autotest_common.sh@941 -- # uname
00:25:38.542 04:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:38.542 04:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90092
00:25:38.542 killing process with pid 90092
Received shutdown signal, test time was about 10.000000 seconds
00:25:38.542
00:25:38.542 Latency(us)
00:25:38.542 [2024-11-08T04:09:13.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.542 [2024-11-08T04:09:13.653Z] ===================================================================================================================
00:25:38.542 [2024-11-08T04:09:13.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:38.542 04:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:38.542 04:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:38.542 04:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90092'
00:25:38.542 04:09:12 -- common/autotest_common.sh@955 -- # kill 90092
00:25:38.542 04:09:12 -- common/autotest_common.sh@960 -- # wait 90092
00:25:38.542 04:09:13 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:38.542 04:09:13 -- host/timeout.sh@110 -- # bdevperf_pid=90377
00:25:38.542 04:09:13 -- host/timeout.sh@112 -- # waitforlisten 90377 /var/tmp/bdevperf.sock
00:25:38.542 04:09:13 -- common/autotest_common.sh@829 -- # '[' -z 90377 ']'
00:25:38.542 04:09:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:38.542 04:09:13 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:38.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:38.542 04:09:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:38.542 04:09:13 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:38.542 04:09:13 -- common/autotest_common.sh@10 -- # set +x
00:25:38.542 [2024-11-08 04:09:13.252791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
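(Sanity check on the results table above: 9350.08 IOPS x 4096-byte reads is about 36.52 MiB/s, matching the MiB/s column.) The new bdevperf instance is started with -z, so it idles until it is configured over its private RPC socket; condensed from the commands that follow in this log, the driving pattern looks like this sketch (paths are this workspace's layout):

    # start bdevperf idle, listening for RPCs on a private socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    # configure the NVMe-oF TCP bdev over that socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # then kick off the I/O run
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The two attach flags bound the retry behavior exercised next: --reconnect-delay-sec sets how often bdev_nvme retries a lost connection, and --ctrlr-loss-timeout-sec sets how long it keeps trying before declaring the controller lost.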
00:25:38.542 [2024-11-08 04:09:13.252880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90377 ] 00:25:38.542 [2024-11-08 04:09:13.382082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.542 [2024-11-08 04:09:13.474397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.479 04:09:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.479 04:09:14 -- common/autotest_common.sh@862 -- # return 0 00:25:39.479 04:09:14 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90377 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:39.479 04:09:14 -- host/timeout.sh@116 -- # dtrace_pid=90405 00:25:39.479 04:09:14 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:39.479 04:09:14 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:39.737 NVMe0n1 00:25:39.737 04:09:14 -- host/timeout.sh@124 -- # rpc_pid=90463 00:25:39.737 04:09:14 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.737 04:09:14 -- host/timeout.sh@125 -- # sleep 1 00:25:39.737 Running I/O for 10 seconds... 00:25:40.672 04:09:15 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.933 [2024-11-08 04:09:15.946006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213400 is same with the state(5) to be set 00:25:40.933 [2024-11-08 04:09:15.946281] 
00:25:40.933 [2024-11-08 04:09:15.946704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.933 [2024-11-08 04:09:15.946736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[an identical READ command / "ABORTED - SQ DELETION" completion pair is printed for every remaining queued I/O between 04:09:15.946757 and 04:09:15.949065, roughly a hundred further pairs differing only in cid and lba; duplicates omitted]
00:25:40.936 [2024-11-08 04:09:15.949074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc71050 is same with the state(5) to be set
00:25:40.936 [2024-11-08 04:09:15.949084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:40.936 [2024-11-08 04:09:15.949091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:40.936 [2024-11-08 04:09:15.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117728 len:8 PRP1 0x0 PRP2 0x0
00:25:40.936 [2024-11-08 04:09:15.949106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.936 [2024-11-08 04:09:15.949157] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc71050 was disconnected and freed. reset controller.
00:25:40.936 [2024-11-08 04:09:15.949392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:40.936 [2024-11-08 04:09:15.949545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbdc0 (9): Bad file descriptor
00:25:40.936 [2024-11-08 04:09:15.949675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.936 [2024-11-08 04:09:15.949723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.936 [2024-11-08 04:09:15.949739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfbdc0 with addr=10.0.0.2, port=4420
00:25:40.936 [2024-11-08 04:09:15.949750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbdc0 is same with the state(5) to be set
00:25:40.936 [2024-11-08 04:09:15.949767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbdc0 (9): Bad file descriptor
00:25:40.936 [2024-11-08 04:09:15.949799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:40.936 [2024-11-08 04:09:15.949808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:40.936 [2024-11-08 04:09:15.949819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:40.936 [2024-11-08 04:09:15.949838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
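Everything from the SQ-deletion aborts down to "Resetting controller failed." above is the initiator reacting to a single event: the listener was removed at host/timeout.sh@126, so the target now refuses new TCP connections and every reconnect attempt dies in connect() with errno 111 (ECONNREFUSED). A sketch of the fault injection and its inverse follows; the add_listener call is the standard counterpart RPC and is not part of this run:

    # Removing the listener (as host/timeout.sh@126 did above) forces ECONNREFUSED
    # on every reconnect attempt; re-adding it would let the next scheduled
    # attempt succeed instead of timing out.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ...watch the initiator log errno 111 at 2 s intervals, then undo the fault:
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420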
00:25:40.936 [2024-11-08 04:09:15.949848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:40.936 04:09:15 -- host/timeout.sh@128 -- # wait 90463
00:25:43.464 [2024-11-08 04:09:17.949995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.464 [2024-11-08 04:09:17.950075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.464 [2024-11-08 04:09:17.950093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfbdc0 with addr=10.0.0.2, port=4420
00:25:43.464 [2024-11-08 04:09:17.950104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbdc0 is same with the state(5) to be set
00:25:43.464 [2024-11-08 04:09:17.950132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbdc0 (9): Bad file descriptor
00:25:43.464 [2024-11-08 04:09:17.950150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.464 [2024-11-08 04:09:17.950159] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.464 [2024-11-08 04:09:17.950169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.464 [2024-11-08 04:09:17.950189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.464 [2024-11-08 04:09:17.950200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:45.364 [2024-11-08 04:09:19.950285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:45.364 [2024-11-08 04:09:19.950362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:45.364 [2024-11-08 04:09:19.950379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfbdc0 with addr=10.0.0.2, port=4420
00:25:45.364 [2024-11-08 04:09:19.950389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbdc0 is same with the state(5) to be set
00:25:45.364 [2024-11-08 04:09:19.950407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbdc0 (9): Bad file descriptor
00:25:45.364 [2024-11-08 04:09:19.950422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:45.364 [2024-11-08 04:09:19.950442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:45.364 [2024-11-08 04:09:19.950468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:45.364 [2024-11-08 04:09:19.950494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:45.364 [2024-11-08 04:09:19.950504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:47.267 [2024-11-08 04:09:21.950551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
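The timestamps above follow directly from the attach flags: with --reconnect-delay-sec 2 the initiator retries at roughly 04:09:15.95, 04:09:17.95, and 04:09:19.95, and because the 5 s --ctrlr-loss-timeout-sec budget lapses before a fourth attempt can fire, at 04:09:21.95 the controller is left in failed state for good. A small sketch of that arithmetic (illustration only, not test code):

    # Expected reconnect schedule for the flags used in this run.
    RECONNECT_DELAY=2   # --reconnect-delay-sec
    CTRLR_LOSS=5        # --ctrlr-loss-timeout-sec
    for (( t = 0; t <= CTRLR_LOSS; t += RECONNECT_DELAY )); do
        echo "reconnect attempt ~${t}s after disconnect"
    done
    # Prints attempts at 0s, 2s, and 4s - three in total, matching the three
    # "resetting controller" notices above; after 5s the controller is declared lost.
    echo "no attempt after ${CTRLR_LOSS}s: controller is declared lost"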
00:25:47.267 [2024-11-08 04:09:21.950593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:47.267 [2024-11-08 04:09:21.950603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:47.267 [2024-11-08 04:09:21.950611] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:47.267 [2024-11-08 04:09:21.950629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:48.203 
00:25:48.203 Latency(us)
00:25:48.203 [2024-11-08T04:09:23.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:48.203 [2024-11-08T04:09:23.314Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:48.203 NVMe0n1 : 8.13 3377.58 13.19 15.74 0.00 37681.09 2859.75 7015926.69
00:25:48.203 [2024-11-08T04:09:23.314Z] ===================================================================================================================
00:25:48.203 [2024-11-08T04:09:23.314Z] Total : 3377.58 13.19 15.74 0.00 37681.09 2859.75 7015926.69
00:25:48.203 0
00:25:48.203 04:09:22 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:48.203 Attaching 5 probes...
00:25:48.203 1217.112630: reset bdev controller NVMe0
00:25:48.203 1217.316430: reconnect bdev controller NVMe0
00:25:48.203 3217.611876: reconnect delay bdev controller NVMe0
00:25:48.203 3217.644597: reconnect bdev controller NVMe0
00:25:48.203 5217.949092: reconnect delay bdev controller NVMe0
00:25:48.203 5217.963627: reconnect bdev controller NVMe0
00:25:48.203 7218.251386: reconnect delay bdev controller NVMe0
00:25:48.203 7218.263416: reconnect bdev controller NVMe0
00:25:48.203 04:09:22 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:25:48.203 04:09:22 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:25:48.203 04:09:22 -- host/timeout.sh@136 -- # kill 90405
00:25:48.203 04:09:22 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:48.203 04:09:22 -- host/timeout.sh@139 -- # killprocess 90377
00:25:48.203 04:09:22 -- common/autotest_common.sh@936 -- # '[' -z 90377 ']'
00:25:48.203 04:09:22 -- common/autotest_common.sh@940 -- # kill -0 90377
00:25:48.203 04:09:22 -- common/autotest_common.sh@941 -- # uname
00:25:48.203 04:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:48.203 04:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90377
00:25:48.203 killing process with pid 90377
00:25:48.203 Received shutdown signal, test time was about 8.203331 seconds
00:25:48.203 
00:25:48.203 Latency(us)
00:25:48.203 [2024-11-08T04:09:23.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:48.203 [2024-11-08T04:09:23.314Z] ===================================================================================================================
00:25:48.203 [2024-11-08T04:09:23.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:48.203 04:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:48.203 04:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:48.203 04:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90377'
00:25:48.203 04:09:23 -- common/autotest_common.sh@955 -- # kill 90377
00:25:48.203 04:09:23 -- common/autotest_common.sh@960 -- # wait 90377
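The verdict above rests entirely on the bpftrace capture: host/timeout.sh@132 requires more than two "reconnect delay" events in trace.txt, so (( 3 <= 2 )) evaluating false is the passing path. A hedged sketch of the same check, extended to also confirm the ~2 s spacing (assuming, as the capture suggests, that the leading timestamps are milliseconds):

    # Sketch: validate count and spacing of the reconnect-delay events captured above.
    # Assumes trace.txt lines of the form "3217.611876: reconnect delay bdev controller NVMe0".
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    (( count > 2 )) || { echo "expected >2 reconnect delays, got $count"; exit 1; }
    # Print the gap between consecutive reconnect-delay events; expect ~2.000 s each.
    grep 'reconnect delay' "$trace" |
        awk -F: '{ if (prev) printf "spacing %.3f s\n", ($1 - prev) / 1000; prev = $1 }'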
00:25:48.462 04:09:23 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:48.462 04:09:23 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:25:48.462 04:09:23 -- host/timeout.sh@145 -- # nvmftestfini
00:25:48.462 04:09:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:48.462 04:09:23 -- nvmf/common.sh@116 -- # sync
00:25:48.462 04:09:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:48.462 04:09:23 -- nvmf/common.sh@119 -- # set +e
00:25:48.462 04:09:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:48.462 04:09:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:48.462 rmmod nvme_tcp
00:25:48.720 rmmod nvme_fabrics
00:25:48.720 rmmod nvme_keyring
00:25:48.720 04:09:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:48.720 04:09:23 -- nvmf/common.sh@123 -- # set -e
00:25:48.720 04:09:23 -- nvmf/common.sh@124 -- # return 0
00:25:48.720 04:09:23 -- nvmf/common.sh@477 -- # '[' -n 89800 ']'
00:25:48.720 04:09:23 -- nvmf/common.sh@478 -- # killprocess 89800
00:25:48.720 04:09:23 -- common/autotest_common.sh@936 -- # '[' -z 89800 ']'
00:25:48.720 04:09:23 -- common/autotest_common.sh@940 -- # kill -0 89800
00:25:48.720 04:09:23 -- common/autotest_common.sh@941 -- # uname
00:25:48.720 04:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:48.720 04:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89800
00:25:48.720 killing process with pid 89800
00:25:48.720 04:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:48.720 04:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:48.720 04:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89800'
00:25:48.720 04:09:23 -- common/autotest_common.sh@955 -- # kill 89800
00:25:48.720 04:09:23 -- common/autotest_common.sh@960 -- # wait 89800
00:25:48.979 04:09:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:48.979 04:09:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:48.979 04:09:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:48.979 04:09:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:48.979 04:09:23 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:48.979 04:09:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:48.979 04:09:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:48.979 04:09:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:48.979 04:09:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:25:48.979 
00:25:48.979 real 0m46.896s
00:25:48.979 user 2m16.449s
00:25:48.979 sys 0m5.602s
00:25:48.979 04:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:48.979 ************************************
00:25:48.979 END TEST nvmf_timeout
00:25:48.979 ************************************
00:25:48.979 04:09:24 -- common/autotest_common.sh@10 -- # set +x
00:25:48.979 04:09:24 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]]
00:25:48.979 04:09:24 -- nvmf/nvmf.sh@127 -- # timing_exit host
00:25:48.979 04:09:24 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:48.979 04:09:24 -- common/autotest_common.sh@10 -- # set +x
00:25:49.238 04:09:24 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:25:49.238 
00:25:49.238 real 18m44.611s
00:25:49.238 user 60m11.230s
00:25:49.238 sys 3m45.261s
00:25:49.238 04:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:49.238 ************************************
00:25:49.238 END TEST nvmf_tcp
00:25:49.238 ************************************
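Before the spdkcli suite starts below, one note on the module teardown inside nvmftestfini above: the single "modprobe -v -r nvme-tcp" at nvmf/common.sh@121 produced all three rmmod lines, because modprobe -r also unloads dependencies that are no longer referenced. An approximate by-hand equivalent (sketch; module names taken from the log):

    # nvmftestfini's module cleanup, approximately: unload the TCP initiator module,
    # letting modprobe -r pull out now-unused dependents as well.
    sudo modprobe -v -r nvme-tcp      # in this run: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    sudo modprobe -v -r nvme-fabrics  # harmless no-op if the first call already removed it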
TEST nvmf_tcp 00:25:49.238 ************************************ 00:25:49.238 04:09:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.238 04:09:24 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:49.238 04:09:24 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:49.238 04:09:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.238 04:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.238 04:09:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.238 ************************************ 00:25:49.238 START TEST spdkcli_nvmf_tcp 00:25:49.238 ************************************ 00:25:49.238 04:09:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:49.238 * Looking for test storage... 00:25:49.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:49.238 04:09:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:49.238 04:09:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:49.238 04:09:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:49.497 04:09:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:49.497 04:09:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:49.497 04:09:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:49.497 04:09:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:49.497 04:09:24 -- scripts/common.sh@335 -- # IFS=.-: 00:25:49.497 04:09:24 -- scripts/common.sh@335 -- # read -ra ver1 00:25:49.497 04:09:24 -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.497 04:09:24 -- scripts/common.sh@336 -- # read -ra ver2 00:25:49.497 04:09:24 -- scripts/common.sh@337 -- # local 'op=<' 00:25:49.497 04:09:24 -- scripts/common.sh@339 -- # ver1_l=2 00:25:49.497 04:09:24 -- scripts/common.sh@340 -- # ver2_l=1 00:25:49.497 04:09:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:49.497 04:09:24 -- scripts/common.sh@343 -- # case "$op" in 00:25:49.497 04:09:24 -- scripts/common.sh@344 -- # : 1 00:25:49.497 04:09:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:49.497 04:09:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.497 04:09:24 -- scripts/common.sh@364 -- # decimal 1 00:25:49.497 04:09:24 -- scripts/common.sh@352 -- # local d=1 00:25:49.497 04:09:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.497 04:09:24 -- scripts/common.sh@354 -- # echo 1 00:25:49.497 04:09:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:49.497 04:09:24 -- scripts/common.sh@365 -- # decimal 2 00:25:49.497 04:09:24 -- scripts/common.sh@352 -- # local d=2 00:25:49.497 04:09:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.497 04:09:24 -- scripts/common.sh@354 -- # echo 2 00:25:49.497 04:09:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:49.497 04:09:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:49.497 04:09:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:49.497 04:09:24 -- scripts/common.sh@367 -- # return 0 00:25:49.497 04:09:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.497 04:09:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.497 --rc genhtml_branch_coverage=1 00:25:49.497 --rc genhtml_function_coverage=1 00:25:49.497 --rc genhtml_legend=1 00:25:49.497 --rc geninfo_all_blocks=1 00:25:49.497 --rc geninfo_unexecuted_blocks=1 00:25:49.497 00:25:49.497 ' 00:25:49.497 04:09:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.497 --rc genhtml_branch_coverage=1 00:25:49.497 --rc genhtml_function_coverage=1 00:25:49.497 --rc genhtml_legend=1 00:25:49.497 --rc geninfo_all_blocks=1 00:25:49.497 --rc geninfo_unexecuted_blocks=1 00:25:49.497 00:25:49.497 ' 00:25:49.497 04:09:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.497 --rc genhtml_branch_coverage=1 00:25:49.497 --rc genhtml_function_coverage=1 00:25:49.497 --rc genhtml_legend=1 00:25:49.497 --rc geninfo_all_blocks=1 00:25:49.497 --rc geninfo_unexecuted_blocks=1 00:25:49.497 00:25:49.497 ' 00:25:49.497 04:09:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.497 --rc genhtml_branch_coverage=1 00:25:49.497 --rc genhtml_function_coverage=1 00:25:49.497 --rc genhtml_legend=1 00:25:49.497 --rc geninfo_all_blocks=1 00:25:49.497 --rc geninfo_unexecuted_blocks=1 00:25:49.497 00:25:49.497 ' 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:49.498 04:09:24 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:49.498 04:09:24 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:49.498 04:09:24 -- nvmf/common.sh@7 -- # uname -s 00:25:49.498 04:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.498 04:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.498 04:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.498 04:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.498 04:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.498 04:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.498 04:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
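The `lt 1.15 2` xtrace just above is scripts/common.sh checking whether the installed lcov predates 2.x: both version strings are split on `.`/`-`/`:` into arrays and compared field by field, with missing fields treated as 0. A minimal standalone sketch of that comparison, assuming nothing beyond plain bash (the function and variable names here are mine, not the repo's):

#!/usr/bin/env bash
# Compare two dotted version strings field by field, the way the traced
# cmp_versions/lt helpers do. Returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.-:                 # split on the same separators the trace shows
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # A missing field compares as 0, so "1.15" vs "2" is "1.15" vs "2.0"
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message

Here `version_lt 1.15 2` succeeds, which is why the records that follow switch on the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option set.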
00:25:49.498 04:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.498 04:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.498 04:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.498 04:09:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:25:49.498 04:09:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:25:49.498 04:09:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.498 04:09:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.498 04:09:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:49.498 04:09:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.498 04:09:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.498 04:09:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.498 04:09:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.498 04:09:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.498 04:09:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.498 04:09:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.498 04:09:24 -- paths/export.sh@5 -- # export PATH 00:25:49.498 04:09:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.498 04:09:24 -- nvmf/common.sh@46 -- # : 0 00:25:49.498 04:09:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:49.498 04:09:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:49.498 04:09:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:49.498 04:09:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.498 04:09:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.498 04:09:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:49.498 04:09:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:49.498 04:09:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:49.498 04:09:24 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:49.498 04:09:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.498 04:09:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.498 04:09:24 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:49.498 04:09:24 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90688 00:25:49.498 04:09:24 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:49.498 04:09:24 -- spdkcli/common.sh@34 -- # waitforlisten 90688 00:25:49.498 04:09:24 -- common/autotest_common.sh@829 -- # '[' -z 90688 ']' 00:25:49.498 04:09:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.498 04:09:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.498 04:09:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.498 04:09:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.498 04:09:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.498 [2024-11-08 04:09:24.453817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:49.498 [2024-11-08 04:09:24.453923] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90688 ] 00:25:49.498 [2024-11-08 04:09:24.592127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:49.757 [2024-11-08 04:09:24.677042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.757 [2024-11-08 04:09:24.677337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.757 [2024-11-08 04:09:24.677342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.692 04:09:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.692 04:09:25 -- common/autotest_common.sh@862 -- # return 0 00:25:50.692 04:09:25 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:50.692 04:09:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.692 04:09:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.692 04:09:25 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:50.692 04:09:25 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:50.692 04:09:25 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:50.692 04:09:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:50.692 04:09:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.692 04:09:25 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:50.692 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:50.692 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:50.692 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:50.692 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:50.692 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:50.692 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:50.692 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.692 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.692 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:50.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:50.692 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:50.692 ' 00:25:50.950 [2024-11-08 04:09:25.978899] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:53.482 [2024-11-08 04:09:28.261751] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.859 [2024-11-08 04:09:29.551330] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:57.390 [2024-11-08 04:09:31.946267] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:59.292 [2024-11-08 04:09:33.996820] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:00.667 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:00.667 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:00.667 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:00.667 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:00.667 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:00.667 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:26:00.667 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:00.667 04:09:35 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:00.667 04:09:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.667 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:26:00.667 04:09:35 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:00.667 04:09:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.667 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:26:00.667 04:09:35 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:00.667 04:09:35 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:01.235 04:09:36 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:01.235 04:09:36 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:01.235 04:09:36 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:01.235 04:09:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.235 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:26:01.235 04:09:36 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:01.235 04:09:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:01.235 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:26:01.235 04:09:36 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:01.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:01.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:01.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:01.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:01.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:01.235 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:01.235 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:01.235 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:01.235 ' 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:07.802 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:07.802 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:07.802 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:07.802 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:07.803 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:07.803 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:07.803 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:07.803 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:07.803 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:07.803 04:09:41 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:07.803 04:09:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:07.803 04:09:41 -- common/autotest_common.sh@10 -- # set +x 00:26:07.803 04:09:41 -- spdkcli/nvmf.sh@90 -- # killprocess 90688 00:26:07.803 04:09:41 -- common/autotest_common.sh@936 -- # '[' -z 90688 ']' 00:26:07.803 04:09:41 -- common/autotest_common.sh@940 -- # kill -0 90688 00:26:07.803 04:09:41 -- common/autotest_common.sh@941 -- # uname 00:26:07.803 04:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:07.803 04:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90688 00:26:07.803 04:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:07.803 killing process with pid 90688 00:26:07.803 04:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:07.803 04:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90688' 00:26:07.803 04:09:41 -- common/autotest_common.sh@955 -- # kill 90688 00:26:07.803 [2024-11-08 04:09:41.831292] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:07.803 04:09:41 -- common/autotest_common.sh@960 -- # wait 90688 00:26:07.803 04:09:42 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:07.803 04:09:42 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:07.803 04:09:42 -- spdkcli/common.sh@13 -- # '[' -n 90688 ']' 00:26:07.803 04:09:42 -- spdkcli/common.sh@14 -- # killprocess 90688 00:26:07.803 04:09:42 -- common/autotest_common.sh@936 -- # '[' -z 90688 ']' 00:26:07.803 04:09:42 -- common/autotest_common.sh@940 -- # kill -0 90688 00:26:07.803 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90688) - No such process 00:26:07.803 Process with pid 90688 is not found 00:26:07.803 04:09:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 90688 is not found' 00:26:07.803 04:09:42 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:07.803 04:09:42 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:07.803 04:09:42 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:07.803 00:26:07.803 real 0m17.966s 00:26:07.803 user 0m38.855s 00:26:07.803 sys 0m0.922s 00:26:07.803 04:09:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:07.803 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:26:07.803 
************************************ 00:26:07.803 END TEST spdkcli_nvmf_tcp 00:26:07.803 ************************************ 00:26:07.803 04:09:42 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:07.803 04:09:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.803 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:26:07.803 ************************************ 00:26:07.803 START TEST nvmf_identify_passthru 00:26:07.803 ************************************ 00:26:07.803 04:09:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:07.803 * Looking for test storage... 00:26:07.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:07.803 04:09:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:07.803 04:09:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:07.803 04:09:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:07.803 04:09:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:07.803 04:09:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:07.803 04:09:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:07.803 04:09:42 -- scripts/common.sh@335 -- # IFS=.-: 00:26:07.803 04:09:42 -- scripts/common.sh@335 -- # read -ra ver1 00:26:07.803 04:09:42 -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.803 04:09:42 -- scripts/common.sh@336 -- # read -ra ver2 00:26:07.803 04:09:42 -- scripts/common.sh@337 -- # local 'op=<' 00:26:07.803 04:09:42 -- scripts/common.sh@339 -- # ver1_l=2 00:26:07.803 04:09:42 -- scripts/common.sh@340 -- # ver2_l=1 00:26:07.803 04:09:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:07.803 04:09:42 -- scripts/common.sh@343 -- # case "$op" in 00:26:07.803 04:09:42 -- scripts/common.sh@344 -- # : 1 00:26:07.803 04:09:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:07.803 04:09:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.803 04:09:42 -- scripts/common.sh@364 -- # decimal 1 00:26:07.803 04:09:42 -- scripts/common.sh@352 -- # local d=1 00:26:07.803 04:09:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.803 04:09:42 -- scripts/common.sh@354 -- # echo 1 00:26:07.803 04:09:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:07.803 04:09:42 -- scripts/common.sh@365 -- # decimal 2 00:26:07.803 04:09:42 -- scripts/common.sh@352 -- # local d=2 00:26:07.803 04:09:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.803 04:09:42 -- scripts/common.sh@354 -- # echo 2 00:26:07.803 04:09:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:07.803 04:09:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:07.803 04:09:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:07.803 04:09:42 -- scripts/common.sh@367 -- # return 0 00:26:07.803 04:09:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:07.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.803 --rc genhtml_branch_coverage=1 00:26:07.803 --rc genhtml_function_coverage=1 00:26:07.803 --rc genhtml_legend=1 00:26:07.803 --rc geninfo_all_blocks=1 00:26:07.803 --rc geninfo_unexecuted_blocks=1 00:26:07.803 00:26:07.803 ' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:07.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.803 --rc genhtml_branch_coverage=1 00:26:07.803 --rc genhtml_function_coverage=1 00:26:07.803 --rc genhtml_legend=1 00:26:07.803 --rc geninfo_all_blocks=1 00:26:07.803 --rc geninfo_unexecuted_blocks=1 00:26:07.803 00:26:07.803 ' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:07.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.803 --rc genhtml_branch_coverage=1 00:26:07.803 --rc genhtml_function_coverage=1 00:26:07.803 --rc genhtml_legend=1 00:26:07.803 --rc geninfo_all_blocks=1 00:26:07.803 --rc geninfo_unexecuted_blocks=1 00:26:07.803 00:26:07.803 ' 00:26:07.803 04:09:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:07.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.803 --rc genhtml_branch_coverage=1 00:26:07.803 --rc genhtml_function_coverage=1 00:26:07.803 --rc genhtml_legend=1 00:26:07.803 --rc geninfo_all_blocks=1 00:26:07.803 --rc geninfo_unexecuted_blocks=1 00:26:07.803 00:26:07.803 ' 00:26:07.803 04:09:42 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:07.803 04:09:42 -- nvmf/common.sh@7 -- # uname -s 00:26:07.803 04:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.803 04:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.803 04:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.803 04:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.803 04:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.803 04:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.803 04:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.803 04:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.803 04:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.803 04:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.803 04:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 
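As in the earlier nvmf_tcp run, nvmf/common.sh derives the initiator identity at this point from `nvme gen-hostnqn`, which emits `nqn.2014-08.org.nvmexpress:uuid:<uuid>`; the bare UUID then becomes NVME_HOSTID. A hedged sketch of that derivation (the uuidgen fallback is my addition, not something the harness does):

#!/usr/bin/env bash
# Reproduce the host identity captured at nvmf/common.sh@17-19 in the trace.
if command -v nvme >/dev/null 2>&1; then
    hostnqn=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:bcb05152-...
else
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"  # fallback, my assumption
fi
hostid=${hostnqn##*:uuid:}         # strip the NQN prefix, keep the raw UUID

# Mirrors the NVME_HOST array the test later passes to `nvme connect`.
NVME_HOST=("--hostnqn=$hostnqn" "--hostid=$hostid")
echo "${NVME_HOST[@]}"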
00:26:07.803 04:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:26:07.803 04:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.803 04:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.803 04:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:07.803 04:09:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.803 04:09:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.803 04:09:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.803 04:09:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.803 04:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.803 04:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.803 04:09:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.803 04:09:42 -- paths/export.sh@5 -- # export PATH 00:26:07.803 04:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.803 04:09:42 -- nvmf/common.sh@46 -- # : 0 00:26:07.803 04:09:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:07.803 04:09:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:07.803 04:09:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.804 04:09:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.804 04:09:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:07.804 04:09:42 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.804 04:09:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.804 04:09:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.804 04:09:42 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.804 04:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.804 04:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.804 04:09:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.804 04:09:42 -- paths/export.sh@5 -- # export PATH 00:26:07.804 04:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.804 04:09:42 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:07.804 04:09:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.804 04:09:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:07.804 04:09:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:07.804 04:09:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:07.804 04:09:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.804 04:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:07.804 04:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.804 04:09:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:07.804 04:09:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.804 04:09:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.804 04:09:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:07.804 04:09:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:07.804 04:09:42 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:07.804 04:09:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:07.804 04:09:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:07.804 04:09:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.804 04:09:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:07.804 04:09:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:07.804 04:09:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:07.804 04:09:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:07.804 04:09:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:07.804 04:09:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:07.804 Cannot find device "nvmf_tgt_br" 00:26:07.804 04:09:42 -- nvmf/common.sh@154 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:07.804 Cannot find device "nvmf_tgt_br2" 00:26:07.804 04:09:42 -- nvmf/common.sh@155 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:07.804 04:09:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:07.804 Cannot find device "nvmf_tgt_br" 00:26:07.804 04:09:42 -- nvmf/common.sh@157 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:07.804 Cannot find device "nvmf_tgt_br2" 00:26:07.804 04:09:42 -- nvmf/common.sh@158 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:07.804 04:09:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:07.804 04:09:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:07.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.804 04:09:42 -- nvmf/common.sh@161 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:07.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.804 04:09:42 -- nvmf/common.sh@162 -- # true 00:26:07.804 04:09:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:07.804 04:09:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:07.804 04:09:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:07.804 04:09:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:07.804 04:09:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:07.804 04:09:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:07.804 04:09:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:07.804 04:09:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:07.804 04:09:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:07.804 04:09:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:07.804 04:09:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:07.804 04:09:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:07.804 04:09:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:07.804 04:09:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:26:07.804 04:09:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:07.804 04:09:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:07.804 04:09:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:07.804 04:09:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:07.804 04:09:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:07.804 04:09:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:07.804 04:09:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:07.804 04:09:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:07.804 04:09:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:07.804 04:09:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:07.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:26:07.804 00:26:07.804 --- 10.0.0.2 ping statistics --- 00:26:07.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.804 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:26:07.804 04:09:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:07.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:07.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:26:07.804 00:26:07.804 --- 10.0.0.3 ping statistics --- 00:26:07.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.804 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:07.804 04:09:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:07.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
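The nvmf_veth_init sequence traced above builds the whole virtual test network from scratch: a namespace for the target, veth pairs whose `*_br` halves are enslaved to a bridge, addresses from 10.0.0.0/24, and two iptables rules opening TCP/4420 and bridged forwarding, verified by the pings that follow. A condensed sketch using the same interface names (the second target pair, nvmf_tgt_if2/10.0.0.3, follows the identical pattern and is omitted here; error handling and teardown are also dropped; run as root):

#!/usr/bin/env bash
set -e
# The target lives in its own namespace so the initiator reaches it over
# real (virtual) links rather than loopback.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint: the *_if end carries an address, the *_br
# peer stays on the host side to be bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing as in the log: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side halves so both ends share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Open the NVMe/TCP listener port and let bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # same sanity check the harness performs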
00:26:07.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:26:07.804 00:26:07.804 --- 10.0.0.1 ping statistics --- 00:26:07.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.804 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:26:07.804 04:09:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.804 04:09:42 -- nvmf/common.sh@421 -- # return 0 00:26:07.804 04:09:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.804 04:09:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:07.804 04:09:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.804 04:09:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:07.804 04:09:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:07.804 04:09:42 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:07.804 04:09:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.804 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:26:07.804 04:09:42 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:07.804 04:09:42 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:07.804 04:09:42 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:07.804 04:09:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:07.804 04:09:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:07.804 04:09:42 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:07.804 04:09:42 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:07.804 04:09:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:07.804 04:09:42 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:07.804 04:09:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:07.804 04:09:42 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:26:07.804 04:09:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:07.804 04:09:42 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:07.804 04:09:42 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:26:07.804 04:09:42 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:26:07.804 04:09:42 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:07.804 04:09:42 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:07.805 04:09:42 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:08.063 04:09:43 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:26:08.063 04:09:43 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:08.063 04:09:43 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:08.063 04:09:43 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:08.322 04:09:43 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:08.322 04:09:43 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:08.322 04:09:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:08.322 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.322 04:09:43 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:26:08.322 04:09:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:08.322 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.322 04:09:43 -- target/identify_passthru.sh@31 -- # nvmfpid=91195 00:26:08.322 04:09:43 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:08.322 04:09:43 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.322 04:09:43 -- target/identify_passthru.sh@35 -- # waitforlisten 91195 00:26:08.322 04:09:43 -- common/autotest_common.sh@829 -- # '[' -z 91195 ']' 00:26:08.322 04:09:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.322 04:09:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.322 04:09:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.322 04:09:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.322 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.322 [2024-11-08 04:09:43.276414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:08.322 [2024-11-08 04:09:43.276865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.322 [2024-11-08 04:09:43.406988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.581 [2024-11-08 04:09:43.498888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:08.581 [2024-11-08 04:09:43.499035] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.581 [2024-11-08 04:09:43.499047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.581 [2024-11-08 04:09:43.499056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
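The app_setup_trace notice above is the hook for post-mortem debugging: with `-e 0xFFFF` every tracepoint group is enabled, and the events land in the shared-memory ring the notice names. A short sketch of the two options it suggests (the /tmp output paths are my choice, not the harness's):

#!/usr/bin/env bash
# Live decode of the target's tracepoints, exactly as the startup notice
# above suggests (app name "nvmf", shm instance id 0):
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt

# Or keep the raw shared-memory ring for offline analysis later; the
# notice names the file the target created:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.raw

The timeout test at the top of this section applied the same idea to its own probe log, counting three 'reconnect delay bdev controller NVMe0' events with grep -c before tearing down.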
00:26:08.581 [2024-11-08 04:09:43.499222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.581 [2024-11-08 04:09:43.499390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.581 [2024-11-08 04:09:43.500143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.581 [2024-11-08 04:09:43.500208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.581 04:09:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.581 04:09:43 -- common/autotest_common.sh@862 -- # return 0 00:26:08.581 04:09:43 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:08.581 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.581 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.581 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.581 04:09:43 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:08.581 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.581 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.581 [2024-11-08 04:09:43.656578] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:08.581 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.581 04:09:43 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:08.581 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.581 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.581 [2024-11-08 04:09:43.670754] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.581 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.581 04:09:43 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:08.581 04:09:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:08.581 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.839 04:09:43 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:26:08.839 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.839 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.839 Nvme0n1 00:26:08.839 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.839 04:09:43 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:08.839 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.839 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.839 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.839 04:09:43 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:08.840 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.840 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.840 04:09:43 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.840 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.840 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 [2024-11-08 04:09:43.812552] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.840 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:08.840 04:09:43 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:08.840 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.840 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 [2024-11-08 04:09:43.820348] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:08.840 [ 00:26:08.840 { 00:26:08.840 "allow_any_host": true, 00:26:08.840 "hosts": [], 00:26:08.840 "listen_addresses": [], 00:26:08.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:08.840 "subtype": "Discovery" 00:26:08.840 }, 00:26:08.840 { 00:26:08.840 "allow_any_host": true, 00:26:08.840 "hosts": [], 00:26:08.840 "listen_addresses": [ 00:26:08.840 { 00:26:08.840 "adrfam": "IPv4", 00:26:08.840 "traddr": "10.0.0.2", 00:26:08.840 "transport": "TCP", 00:26:08.840 "trsvcid": "4420", 00:26:08.840 "trtype": "TCP" 00:26:08.840 } 00:26:08.840 ], 00:26:08.840 "max_cntlid": 65519, 00:26:08.840 "max_namespaces": 1, 00:26:08.840 "min_cntlid": 1, 00:26:08.840 "model_number": "SPDK bdev Controller", 00:26:08.840 "namespaces": [ 00:26:08.840 { 00:26:08.840 "bdev_name": "Nvme0n1", 00:26:08.840 "name": "Nvme0n1", 00:26:08.840 "nguid": "3009A4720CFB4987914B8810A8B6F78A", 00:26:08.840 "nsid": 1, 00:26:08.840 "uuid": "3009a472-0cfb-4987-914b-8810a8b6f78a" 00:26:08.840 } 00:26:08.840 ], 00:26:08.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.840 "serial_number": "SPDK00000000000001", 00:26:08.840 "subtype": "NVMe" 00:26:08.840 } 00:26:08.840 ] 00:26:08.840 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.840 04:09:43 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:08.840 04:09:43 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:08.840 04:09:43 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:09.098 04:09:44 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:09.098 04:09:44 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:09.098 04:09:44 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:09.098 04:09:44 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:09.357 04:09:44 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:09.357 04:09:44 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:09.357 04:09:44 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:09.357 04:09:44 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.357 04:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.357 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:26:09.357 04:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.357 04:09:44 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:09.357 04:09:44 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:09.357 04:09:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:09.357 04:09:44 -- nvmf/common.sh@116 -- # sync 00:26:09.357 04:09:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:09.357 04:09:44 -- nvmf/common.sh@119 -- # set +e 00:26:09.357 04:09:44 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:26:09.357 04:09:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:09.358 rmmod nvme_tcp 00:26:09.358 rmmod nvme_fabrics 00:26:09.358 rmmod nvme_keyring 00:26:09.358 04:09:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:09.358 04:09:44 -- nvmf/common.sh@123 -- # set -e 00:26:09.358 04:09:44 -- nvmf/common.sh@124 -- # return 0 00:26:09.358 04:09:44 -- nvmf/common.sh@477 -- # '[' -n 91195 ']' 00:26:09.358 04:09:44 -- nvmf/common.sh@478 -- # killprocess 91195 00:26:09.358 04:09:44 -- common/autotest_common.sh@936 -- # '[' -z 91195 ']' 00:26:09.358 04:09:44 -- common/autotest_common.sh@940 -- # kill -0 91195 00:26:09.358 04:09:44 -- common/autotest_common.sh@941 -- # uname 00:26:09.358 04:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:09.358 04:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91195 00:26:09.358 killing process with pid 91195 00:26:09.358 04:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:09.358 04:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:09.358 04:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91195' 00:26:09.358 04:09:44 -- common/autotest_common.sh@955 -- # kill 91195 00:26:09.358 [2024-11-08 04:09:44.422365] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:09.358 04:09:44 -- common/autotest_common.sh@960 -- # wait 91195 00:26:09.925 04:09:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:09.925 04:09:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:09.925 04:09:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:09.925 04:09:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.925 04:09:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:09.925 04:09:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.925 04:09:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:09.925 04:09:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.925 04:09:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:09.925 ************************************ 00:26:09.925 END TEST nvmf_identify_passthru 00:26:09.925 ************************************ 00:26:09.925 00:26:09.925 real 0m2.580s 00:26:09.925 user 0m5.005s 00:26:09.925 sys 0m0.846s 00:26:09.925 04:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:09.925 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:26:09.925 04:09:44 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:09.925 04:09:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:09.925 04:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.925 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:26:09.925 ************************************ 00:26:09.925 START TEST nvmf_dif 00:26:09.925 ************************************ 00:26:09.925 04:09:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:09.925 * Looking for test storage... 
00:26:09.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:09.926 04:09:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:09.926 04:09:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:09.926 04:09:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:09.926 04:09:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:09.926 04:09:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:09.926 04:09:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:09.926 04:09:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:09.926 04:09:45 -- scripts/common.sh@335 -- # IFS=.-: 00:26:09.926 04:09:45 -- scripts/common.sh@335 -- # read -ra ver1 00:26:09.926 04:09:45 -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.926 04:09:45 -- scripts/common.sh@336 -- # read -ra ver2 00:26:09.926 04:09:45 -- scripts/common.sh@337 -- # local 'op=<' 00:26:09.926 04:09:45 -- scripts/common.sh@339 -- # ver1_l=2 00:26:09.926 04:09:45 -- scripts/common.sh@340 -- # ver2_l=1 00:26:09.926 04:09:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:09.926 04:09:45 -- scripts/common.sh@343 -- # case "$op" in 00:26:09.926 04:09:45 -- scripts/common.sh@344 -- # : 1 00:26:09.926 04:09:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:09.926 04:09:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.926 04:09:45 -- scripts/common.sh@364 -- # decimal 1 00:26:09.926 04:09:45 -- scripts/common.sh@352 -- # local d=1 00:26:09.926 04:09:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.926 04:09:45 -- scripts/common.sh@354 -- # echo 1 00:26:09.926 04:09:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:09.926 04:09:45 -- scripts/common.sh@365 -- # decimal 2 00:26:09.926 04:09:45 -- scripts/common.sh@352 -- # local d=2 00:26:09.926 04:09:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.926 04:09:45 -- scripts/common.sh@354 -- # echo 2 00:26:09.926 04:09:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:09.926 04:09:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:09.926 04:09:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:09.926 04:09:45 -- scripts/common.sh@367 -- # return 0 00:26:09.926 04:09:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.926 04:09:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:09.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.926 --rc genhtml_branch_coverage=1 00:26:09.926 --rc genhtml_function_coverage=1 00:26:09.926 --rc genhtml_legend=1 00:26:09.926 --rc geninfo_all_blocks=1 00:26:09.926 --rc geninfo_unexecuted_blocks=1 00:26:09.926 00:26:09.926 ' 00:26:09.926 04:09:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:09.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.926 --rc genhtml_branch_coverage=1 00:26:09.926 --rc genhtml_function_coverage=1 00:26:09.926 --rc genhtml_legend=1 00:26:09.926 --rc geninfo_all_blocks=1 00:26:09.926 --rc geninfo_unexecuted_blocks=1 00:26:09.926 00:26:09.926 ' 00:26:09.926 04:09:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:09.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.926 --rc genhtml_branch_coverage=1 00:26:09.926 --rc genhtml_function_coverage=1 00:26:09.926 --rc genhtml_legend=1 00:26:09.926 --rc geninfo_all_blocks=1 00:26:09.926 --rc geninfo_unexecuted_blocks=1 00:26:09.926 00:26:09.926 ' 00:26:09.926 
04:09:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:09.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.926 --rc genhtml_branch_coverage=1 00:26:09.926 --rc genhtml_function_coverage=1 00:26:09.926 --rc genhtml_legend=1 00:26:09.926 --rc geninfo_all_blocks=1 00:26:09.926 --rc geninfo_unexecuted_blocks=1 00:26:09.926 00:26:09.926 ' 00:26:09.926 04:09:45 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:09.926 04:09:45 -- nvmf/common.sh@7 -- # uname -s 00:26:09.926 04:09:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.926 04:09:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.926 04:09:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.926 04:09:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.926 04:09:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.926 04:09:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.926 04:09:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.926 04:09:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.926 04:09:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.926 04:09:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.186 04:09:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:26:10.186 04:09:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:26:10.186 04:09:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.186 04:09:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.186 04:09:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.186 04:09:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.186 04:09:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.186 04:09:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.186 04:09:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.186 04:09:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.186 04:09:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.186 04:09:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.186 04:09:45 -- paths/export.sh@5 -- # export PATH 00:26:10.186 04:09:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.186 04:09:45 -- nvmf/common.sh@46 -- # : 0 00:26:10.186 04:09:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:10.186 04:09:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:10.186 04:09:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:10.186 04:09:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.186 04:09:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.186 04:09:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:10.186 04:09:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:10.186 04:09:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:10.186 04:09:45 -- target/dif.sh@15 -- # NULL_META=16 00:26:10.186 04:09:45 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:10.186 04:09:45 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:10.186 04:09:45 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:10.186 04:09:45 -- target/dif.sh@135 -- # nvmftestinit 00:26:10.186 04:09:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:10.186 04:09:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.186 04:09:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:10.186 04:09:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:10.186 04:09:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:10.186 04:09:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.186 04:09:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:10.186 04:09:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.186 04:09:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:10.186 04:09:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:10.186 04:09:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:10.186 04:09:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:10.186 04:09:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:10.186 04:09:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:10.186 04:09:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.186 04:09:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.186 04:09:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:10.186 04:09:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:10.186 04:09:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.186 04:09:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.186 04:09:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.186 04:09:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.186 04:09:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.186 04:09:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.186 04:09:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.186 04:09:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.186 04:09:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:10.186 04:09:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:10.186 Cannot find device "nvmf_tgt_br" 
00:26:10.186 04:09:45 -- nvmf/common.sh@154 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.186 Cannot find device "nvmf_tgt_br2" 00:26:10.186 04:09:45 -- nvmf/common.sh@155 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:10.186 04:09:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:10.186 Cannot find device "nvmf_tgt_br" 00:26:10.186 04:09:45 -- nvmf/common.sh@157 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:10.186 Cannot find device "nvmf_tgt_br2" 00:26:10.186 04:09:45 -- nvmf/common.sh@158 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:10.186 04:09:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:10.186 04:09:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.186 04:09:45 -- nvmf/common.sh@161 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.186 04:09:45 -- nvmf/common.sh@162 -- # true 00:26:10.186 04:09:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:10.186 04:09:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:10.186 04:09:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:10.186 04:09:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:10.186 04:09:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:10.186 04:09:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:10.186 04:09:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:10.186 04:09:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:10.186 04:09:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:10.446 04:09:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:10.446 04:09:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:10.446 04:09:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:10.446 04:09:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:10.446 04:09:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:10.446 04:09:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:10.446 04:09:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:10.446 04:09:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:10.446 04:09:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:10.446 04:09:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:10.446 04:09:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:10.446 04:09:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:10.446 04:09:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:10.446 04:09:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:10.446 04:09:45 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:10.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:26:10.446 00:26:10.446 --- 10.0.0.2 ping statistics --- 00:26:10.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.446 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:10.446 04:09:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:10.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:10.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:26:10.446 00:26:10.446 --- 10.0.0.3 ping statistics --- 00:26:10.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.446 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:10.446 04:09:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:10.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:10.446 00:26:10.446 --- 10.0.0.1 ping statistics --- 00:26:10.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.446 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:10.446 04:09:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.446 04:09:45 -- nvmf/common.sh@421 -- # return 0 00:26:10.446 04:09:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:10.446 04:09:45 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:10.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:10.707 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:10.707 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:10.992 04:09:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.992 04:09:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:10.992 04:09:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:10.992 04:09:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.992 04:09:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:10.992 04:09:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:10.992 04:09:45 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:10.992 04:09:45 -- target/dif.sh@137 -- # nvmfappstart 00:26:10.992 04:09:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:10.992 04:09:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:10.992 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:26:10.992 04:09:45 -- nvmf/common.sh@469 -- # nvmfpid=91534 00:26:10.992 04:09:45 -- nvmf/common.sh@470 -- # waitforlisten 91534 00:26:10.992 04:09:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:10.992 04:09:45 -- common/autotest_common.sh@829 -- # '[' -z 91534 ']' 00:26:10.992 04:09:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.992 04:09:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.992 04:09:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
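[For reference, a minimal sketch of the startup that nvmfappstart and waitforlisten perform in the trace above, assuming the repo path, namespace name, and socket path shown in this log; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not the harness code itself:]

    # Launch nvmf_tgt inside the test network namespace, as nvmfappstart does.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Block until the target's JSON-RPC socket appears; bail out if it died early.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.1
    done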
00:26:10.992 04:09:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:10.992 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:26:10.992 [2024-11-08 04:09:45.914732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:10.992 [2024-11-08 04:09:45.915656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.992 [2024-11-08 04:09:46.064480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.262 [2024-11-08 04:09:46.182744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.262 [2024-11-08 04:09:46.182933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.262 [2024-11-08 04:09:46.182951] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.262 [2024-11-08 04:09:46.182964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.262 [2024-11-08 04:09:46.183008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.829 04:09:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:11.829 04:09:46 -- common/autotest_common.sh@862 -- # return 0 00:26:11.829 04:09:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:11.829 04:09:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:11.829 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 04:09:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.087 04:09:46 -- target/dif.sh@139 -- # create_transport 00:26:12.087 04:09:46 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:12.087 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.087 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 [2024-11-08 04:09:46.970318] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.087 04:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.087 04:09:46 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:12.087 04:09:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:12.087 04:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.087 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 ************************************ 00:26:12.087 START TEST fio_dif_1_default 00:26:12.087 ************************************ 00:26:12.087 04:09:46 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:26:12.087 04:09:46 -- target/dif.sh@86 -- # create_subsystems 0 00:26:12.087 04:09:46 -- target/dif.sh@28 -- # local sub 00:26:12.087 04:09:46 -- target/dif.sh@30 -- # for sub in "$@" 00:26:12.087 04:09:46 -- target/dif.sh@31 -- # create_subsystem 0 00:26:12.087 04:09:46 -- target/dif.sh@18 -- # local sub_id=0 00:26:12.087 04:09:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:12.087 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.087 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 bdev_null0 00:26:12.087 04:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.087 04:09:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:12.088 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.088 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.088 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.088 04:09:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:12.088 04:09:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.088 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:26:12.088 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.088 04:09:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:12.088 04:09:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.088 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:26:12.088 [2024-11-08 04:09:47.018455] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.088 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.088 04:09:47 -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:12.088 04:09:47 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:12.088 04:09:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:12.088 04:09:47 -- nvmf/common.sh@520 -- # config=() 00:26:12.088 04:09:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.088 04:09:47 -- nvmf/common.sh@520 -- # local subsystem config 00:26:12.088 04:09:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.088 04:09:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.088 04:09:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.088 { 00:26:12.088 "params": { 00:26:12.088 "name": "Nvme$subsystem", 00:26:12.088 "trtype": "$TEST_TRANSPORT", 00:26:12.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.088 "adrfam": "ipv4", 00:26:12.088 "trsvcid": "$NVMF_PORT", 00:26:12.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.088 "hdgst": ${hdgst:-false}, 00:26:12.088 "ddgst": ${ddgst:-false} 00:26:12.088 }, 00:26:12.088 "method": "bdev_nvme_attach_controller" 00:26:12.088 } 00:26:12.088 EOF 00:26:12.088 )") 00:26:12.088 04:09:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:12.088 04:09:47 -- target/dif.sh@82 -- # gen_fio_conf 00:26:12.088 04:09:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:12.088 04:09:47 -- target/dif.sh@54 -- # local file 00:26:12.088 04:09:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:12.088 04:09:47 -- target/dif.sh@56 -- # cat 00:26:12.088 04:09:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:12.088 04:09:47 -- common/autotest_common.sh@1330 -- # shift 00:26:12.088 04:09:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:12.088 04:09:47 -- nvmf/common.sh@542 -- # cat 00:26:12.088 04:09:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:12.088 04:09:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:12.088 
04:09:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:12.088 04:09:47 -- nvmf/common.sh@544 -- # jq . 00:26:12.088 04:09:47 -- nvmf/common.sh@545 -- # IFS=, 00:26:12.088 04:09:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:12.088 "params": { 00:26:12.088 "name": "Nvme0", 00:26:12.088 "trtype": "tcp", 00:26:12.088 "traddr": "10.0.0.2", 00:26:12.088 "adrfam": "ipv4", 00:26:12.088 "trsvcid": "4420", 00:26:12.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:12.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:12.088 "hdgst": false, 00:26:12.088 "ddgst": false 00:26:12.088 }, 00:26:12.088 "method": "bdev_nvme_attach_controller" 00:26:12.088 }' 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:12.088 04:09:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:12.088 04:09:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:12.088 04:09:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:12.088 04:09:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:12.088 04:09:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:12.088 04:09:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.347 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:12.347 fio-3.35 00:26:12.347 Starting 1 thread 00:26:12.606 [2024-11-08 04:09:47.672309] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:12.606 [2024-11-08 04:09:47.672685] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:24.813 00:26:24.813 filename0: (groupid=0, jobs=1): err= 0: pid=91619: Fri Nov 8 04:09:57 2024 00:26:24.813 read: IOPS=7126, BW=27.8MiB/s (29.2MB/s)(278MiB/10001msec) 00:26:24.813 slat (nsec): min=5772, max=79653, avg=6872.43, stdev=2094.70 00:26:24.813 clat (usec): min=349, max=41673, avg=540.54, stdev=2585.13 00:26:24.813 lat (usec): min=354, max=41681, avg=547.41, stdev=2585.23 00:26:24.813 clat percentiles (usec): 00:26:24.813 | 1.00th=[ 355], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:26:24.813 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 375], 00:26:24.813 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 408], 00:26:24.813 | 99.00th=[ 437], 99.50th=[ 490], 99.90th=[41157], 99.95th=[41157], 00:26:24.813 | 99.99th=[41157] 00:26:24.813 bw ( KiB/s): min=17376, max=40608, per=100.00%, avg=28775.95, stdev=7076.79, samples=19 00:26:24.813 iops : min= 4344, max=10152, avg=7193.95, stdev=1769.25, samples=19 00:26:24.813 lat (usec) : 500=99.52%, 750=0.04%, 1000=0.02% 00:26:24.813 lat (msec) : 2=0.01%, 50=0.40% 00:26:24.813 cpu : usr=86.47%, sys=10.74%, ctx=24, majf=0, minf=9 00:26:24.813 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:24.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.813 issued rwts: total=71272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.813 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:24.813 00:26:24.813 Run status group 0 (all jobs): 00:26:24.813 READ: bw=27.8MiB/s (29.2MB/s), 27.8MiB/s-27.8MiB/s (29.2MB/s-29.2MB/s), io=278MiB (292MB), run=10001-10001msec 00:26:24.813 04:09:58 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:24.813 04:09:58 -- target/dif.sh@43 -- # local sub 00:26:24.813 04:09:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.813 04:09:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:24.813 04:09:58 -- target/dif.sh@36 -- # local sub_id=0 00:26:24.813 04:09:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:24.813 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.813 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.813 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.813 04:09:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:24.813 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.813 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.813 ************************************ 00:26:24.813 END TEST fio_dif_1_default 00:26:24.813 ************************************ 00:26:24.813 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.813 00:26:24.813 real 0m11.041s 00:26:24.813 user 0m9.332s 00:26:24.813 sys 0m1.325s 00:26:24.813 04:09:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.813 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.813 04:09:58 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:24.813 04:09:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:24.813 04:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.813 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.813 ************************************ 00:26:24.813 START 
TEST fio_dif_1_multi_subsystems 00:26:24.813 ************************************ 00:26:24.813 04:09:58 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:24.813 04:09:58 -- target/dif.sh@92 -- # local files=1 00:26:24.813 04:09:58 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:24.813 04:09:58 -- target/dif.sh@28 -- # local sub 00:26:24.813 04:09:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.813 04:09:58 -- target/dif.sh@31 -- # create_subsystem 0 00:26:24.813 04:09:58 -- target/dif.sh@18 -- # local sub_id=0 00:26:24.813 04:09:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:24.813 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.813 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.813 bdev_null0 00:26:24.813 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.813 04:09:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 [2024-11-08 04:09:58.112309] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.814 04:09:58 -- target/dif.sh@31 -- # create_subsystem 1 00:26:24.814 04:09:58 -- target/dif.sh@18 -- # local sub_id=1 00:26:24.814 04:09:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 bdev_null1 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.814 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.814 04:09:58 -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.814 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.814 04:09:58 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:24.814 04:09:58 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:24.814 04:09:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:24.814 04:09:58 -- nvmf/common.sh@520 -- # config=() 00:26:24.814 04:09:58 -- nvmf/common.sh@520 -- # local subsystem config 00:26:24.814 04:09:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.814 04:09:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.814 04:09:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.814 04:09:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.814 { 00:26:24.814 "params": { 00:26:24.814 "name": "Nvme$subsystem", 00:26:24.814 "trtype": "$TEST_TRANSPORT", 00:26:24.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.814 "adrfam": "ipv4", 00:26:24.814 "trsvcid": "$NVMF_PORT", 00:26:24.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.814 "hdgst": ${hdgst:-false}, 00:26:24.814 "ddgst": ${ddgst:-false} 00:26:24.814 }, 00:26:24.814 "method": "bdev_nvme_attach_controller" 00:26:24.814 } 00:26:24.814 EOF 00:26:24.814 )") 00:26:24.814 04:09:58 -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.814 04:09:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:24.814 04:09:58 -- target/dif.sh@54 -- # local file 00:26:24.814 04:09:58 -- target/dif.sh@56 -- # cat 00:26:24.814 04:09:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.814 04:09:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:24.814 04:09:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.814 04:09:58 -- common/autotest_common.sh@1330 -- # shift 00:26:24.814 04:09:58 -- nvmf/common.sh@542 -- # cat 00:26:24.814 04:09:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:24.814 04:09:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.814 04:09:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.814 04:09:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:24.814 04:09:58 -- target/dif.sh@73 -- # cat 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.814 04:09:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.814 04:09:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.814 { 00:26:24.814 "params": { 00:26:24.814 "name": "Nvme$subsystem", 00:26:24.814 "trtype": "$TEST_TRANSPORT", 00:26:24.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.814 "adrfam": "ipv4", 00:26:24.814 "trsvcid": "$NVMF_PORT", 00:26:24.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.814 "hdgst": ${hdgst:-false}, 00:26:24.814 "ddgst": ${ddgst:-false} 00:26:24.814 }, 00:26:24.814 "method": "bdev_nvme_attach_controller" 00:26:24.814 } 00:26:24.814 EOF 00:26:24.814 )") 00:26:24.814 04:09:58 -- target/dif.sh@72 -- # (( file++ )) 00:26:24.814 04:09:58 -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:24.814 04:09:58 -- nvmf/common.sh@542 -- # cat 00:26:24.814 04:09:58 -- nvmf/common.sh@544 -- # jq . 00:26:24.814 04:09:58 -- nvmf/common.sh@545 -- # IFS=, 00:26:24.814 04:09:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:24.814 "params": { 00:26:24.814 "name": "Nvme0", 00:26:24.814 "trtype": "tcp", 00:26:24.814 "traddr": "10.0.0.2", 00:26:24.814 "adrfam": "ipv4", 00:26:24.814 "trsvcid": "4420", 00:26:24.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.814 "hdgst": false, 00:26:24.814 "ddgst": false 00:26:24.814 }, 00:26:24.814 "method": "bdev_nvme_attach_controller" 00:26:24.814 },{ 00:26:24.814 "params": { 00:26:24.814 "name": "Nvme1", 00:26:24.814 "trtype": "tcp", 00:26:24.814 "traddr": "10.0.0.2", 00:26:24.814 "adrfam": "ipv4", 00:26:24.814 "trsvcid": "4420", 00:26:24.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.814 "hdgst": false, 00:26:24.814 "ddgst": false 00:26:24.814 }, 00:26:24.814 "method": "bdev_nvme_attach_controller" 00:26:24.814 }' 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.814 04:09:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.814 04:09:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.814 04:09:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.814 04:09:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.814 04:09:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:24.814 04:09:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.814 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:24.814 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:24.814 fio-3.35 00:26:24.814 Starting 2 threads 00:26:24.814 [2024-11-08 04:09:58.900511] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:24.814 [2024-11-08 04:09:58.900583] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:34.789 00:26:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=91780: Fri Nov 8 04:10:09 2024 00:26:34.789 read: IOPS=383, BW=1533KiB/s (1569kB/s)(15.0MiB/10011msec) 00:26:34.789 slat (nsec): min=5910, max=49477, avg=7588.24, stdev=3145.52 00:26:34.789 clat (usec): min=355, max=41451, avg=10415.39, stdev=17486.06 00:26:34.789 lat (usec): min=361, max=41459, avg=10422.98, stdev=17486.27 00:26:34.789 clat percentiles (usec): 00:26:34.789 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:26:34.789 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 404], 00:26:34.789 | 70.00th=[ 433], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:26:34.789 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:34.789 | 99.99th=[41681] 00:26:34.789 bw ( KiB/s): min= 864, max= 2656, per=55.57%, avg=1532.80, stdev=432.44, samples=20 00:26:34.789 iops : min= 216, max= 664, avg=383.20, stdev=108.11, samples=20 00:26:34.789 lat (usec) : 500=72.50%, 750=2.11%, 1000=0.47% 00:26:34.789 lat (msec) : 2=0.21%, 50=24.71% 00:26:34.789 cpu : usr=95.12%, sys=4.39%, ctx=49, majf=0, minf=0 00:26:34.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.789 issued rwts: total=3836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:34.789 filename1: (groupid=0, jobs=1): err= 0: pid=91781: Fri Nov 8 04:10:09 2024 00:26:34.789 read: IOPS=306, BW=1224KiB/s (1254kB/s)(12.0MiB/10011msec) 00:26:34.789 slat (nsec): min=5887, max=48036, avg=8002.86, stdev=4034.39 00:26:34.789 clat (usec): min=356, max=41444, avg=13044.73, stdev=18787.87 00:26:34.789 lat (usec): min=362, max=41453, avg=13052.73, stdev=18788.05 00:26:34.789 clat percentiles (usec): 00:26:34.789 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:26:34.789 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 416], 00:26:34.789 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:34.789 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:34.789 | 99.99th=[41681] 00:26:34.789 bw ( KiB/s): min= 864, max= 1856, per=44.40%, avg=1224.00, stdev=237.86, samples=20 00:26:34.789 iops : min= 216, max= 464, avg=306.00, stdev=59.46, samples=20 00:26:34.789 lat (usec) : 500=67.82%, 750=0.59%, 1000=0.26% 00:26:34.789 lat (msec) : 2=0.13%, 50=31.20% 00:26:34.789 cpu : usr=95.77%, sys=3.78%, ctx=9, majf=0, minf=0 00:26:34.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.789 issued rwts: total=3064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:34.789 00:26:34.789 Run status group 0 (all jobs): 00:26:34.789 READ: bw=2757KiB/s (2823kB/s), 1224KiB/s-1533KiB/s (1254kB/s-1569kB/s), io=27.0MiB (28.3MB), run=10011-10011msec 00:26:34.789 04:10:09 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:34.789 04:10:09 -- target/dif.sh@43 -- # local sub 00:26:34.789 04:10:09 -- target/dif.sh@45 -- # for sub in 
"$@" 00:26:34.789 04:10:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.789 04:10:09 -- target/dif.sh@36 -- # local sub_id=0 00:26:34.789 04:10:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.789 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.789 04:10:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.789 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.789 04:10:09 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.789 04:10:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:34.789 04:10:09 -- target/dif.sh@36 -- # local sub_id=1 00:26:34.789 04:10:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.789 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.789 04:10:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:34.789 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 ************************************ 00:26:34.789 END TEST fio_dif_1_multi_subsystems 00:26:34.789 ************************************ 00:26:34.789 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.789 00:26:34.789 real 0m11.199s 00:26:34.789 user 0m19.923s 00:26:34.789 sys 0m1.106s 00:26:34.789 04:10:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 04:10:09 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:34.789 04:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:34.789 04:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 ************************************ 00:26:34.789 START TEST fio_dif_rand_params 00:26:34.789 ************************************ 00:26:34.789 04:10:09 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:34.789 04:10:09 -- target/dif.sh@100 -- # local NULL_DIF 00:26:34.789 04:10:09 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:34.789 04:10:09 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:34.789 04:10:09 -- target/dif.sh@103 -- # bs=128k 00:26:34.789 04:10:09 -- target/dif.sh@103 -- # numjobs=3 00:26:34.789 04:10:09 -- target/dif.sh@103 -- # iodepth=3 00:26:34.789 04:10:09 -- target/dif.sh@103 -- # runtime=5 00:26:34.789 04:10:09 -- target/dif.sh@105 -- # create_subsystems 0 00:26:34.789 04:10:09 -- target/dif.sh@28 -- # local sub 00:26:34.789 04:10:09 -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.789 04:10:09 -- target/dif.sh@31 -- # create_subsystem 0 00:26:34.789 04:10:09 -- target/dif.sh@18 -- # local sub_id=0 00:26:34.789 04:10:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:34.789 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.789 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.789 bdev_null0 00:26:34.789 04:10:09 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.790 04:10:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:34.790 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.790 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.790 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.790 04:10:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:34.790 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.790 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.790 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.790 04:10:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.790 04:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.790 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:34.790 [2024-11-08 04:10:09.368284] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.790 04:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.790 04:10:09 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:34.790 04:10:09 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:34.790 04:10:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:34.790 04:10:09 -- nvmf/common.sh@520 -- # config=() 00:26:34.790 04:10:09 -- nvmf/common.sh@520 -- # local subsystem config 00:26:34.790 04:10:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.790 04:10:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.790 04:10:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.790 04:10:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.790 { 00:26:34.790 "params": { 00:26:34.790 "name": "Nvme$subsystem", 00:26:34.790 "trtype": "$TEST_TRANSPORT", 00:26:34.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.790 "adrfam": "ipv4", 00:26:34.790 "trsvcid": "$NVMF_PORT", 00:26:34.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.790 "hdgst": ${hdgst:-false}, 00:26:34.790 "ddgst": ${ddgst:-false} 00:26:34.790 }, 00:26:34.790 "method": "bdev_nvme_attach_controller" 00:26:34.790 } 00:26:34.790 EOF 00:26:34.790 )") 00:26:34.790 04:10:09 -- target/dif.sh@82 -- # gen_fio_conf 00:26:34.790 04:10:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:34.790 04:10:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.790 04:10:09 -- target/dif.sh@54 -- # local file 00:26:34.790 04:10:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:34.790 04:10:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.790 04:10:09 -- target/dif.sh@56 -- # cat 00:26:34.790 04:10:09 -- common/autotest_common.sh@1330 -- # shift 00:26:34.790 04:10:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:34.790 04:10:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.790 04:10:09 -- nvmf/common.sh@542 -- # cat 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.790 04:10:09 
-- target/dif.sh@72 -- # (( file = 1 )) 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:34.790 04:10:09 -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.790 04:10:09 -- nvmf/common.sh@544 -- # jq . 00:26:34.790 04:10:09 -- nvmf/common.sh@545 -- # IFS=, 00:26:34.790 04:10:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:34.790 "params": { 00:26:34.790 "name": "Nvme0", 00:26:34.790 "trtype": "tcp", 00:26:34.790 "traddr": "10.0.0.2", 00:26:34.790 "adrfam": "ipv4", 00:26:34.790 "trsvcid": "4420", 00:26:34.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.790 "hdgst": false, 00:26:34.790 "ddgst": false 00:26:34.790 }, 00:26:34.790 "method": "bdev_nvme_attach_controller" 00:26:34.790 }' 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.790 04:10:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.790 04:10:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:34.790 04:10:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.790 04:10:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.790 04:10:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:34.790 04:10:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.790 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:34.790 ... 00:26:34.790 fio-3.35 00:26:34.790 Starting 3 threads 00:26:35.049 [2024-11-08 04:10:10.012147] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:35.049 [2024-11-08 04:10:10.012220] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:40.319 00:26:40.319 filename0: (groupid=0, jobs=1): err= 0: pid=91937: Fri Nov 8 04:10:15 2024 00:26:40.319 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(164MiB/5032msec) 00:26:40.319 slat (usec): min=5, max=210, avg=11.96, stdev= 7.69 00:26:40.319 clat (usec): min=4525, max=52351, avg=11496.93, stdev=10166.80 00:26:40.319 lat (usec): min=4536, max=52362, avg=11508.89, stdev=10166.69 00:26:40.319 clat percentiles (usec): 00:26:40.319 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6718], 00:26:40.320 | 30.00th=[ 8029], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[ 9896], 00:26:40.320 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[47449], 00:26:40.320 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:26:40.320 | 99.99th=[52167] 00:26:40.320 bw ( KiB/s): min=24320, max=40448, per=28.99%, avg=32369.78, stdev=6712.51, samples=9 00:26:40.320 iops : min= 190, max= 316, avg=252.89, stdev=52.44, samples=9 00:26:40.320 lat (msec) : 10=63.69%, 20=29.67%, 50=4.27%, 100=2.36% 00:26:40.320 cpu : usr=93.50%, sys=4.55%, ctx=80, majf=0, minf=0 00:26:40.320 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.320 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.320 filename0: (groupid=0, jobs=1): err= 0: pid=91938: Fri Nov 8 04:10:15 2024 00:26:40.320 read: IOPS=350, BW=43.8MiB/s (45.9MB/s)(219MiB/5002msec) 00:26:40.320 slat (nsec): min=6170, max=74209, avg=11113.24, stdev=6777.68 00:26:40.320 clat (usec): min=3282, max=13964, avg=8536.23, stdev=3352.42 00:26:40.320 lat (usec): min=3289, max=13970, avg=8547.34, stdev=3352.60 00:26:40.320 clat percentiles (usec): 00:26:40.320 | 1.00th=[ 3392], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 4228], 00:26:40.320 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 9896], 00:26:40.320 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:26:40.320 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13960], 00:26:40.320 | 99.99th=[13960] 00:26:40.320 bw ( KiB/s): min=31488, max=57600, per=39.89%, avg=44544.00, stdev=10282.31, samples=9 00:26:40.320 iops : min= 246, max= 450, avg=348.00, stdev=80.33, samples=9 00:26:40.320 lat (msec) : 4=19.35%, 10=41.04%, 20=39.61% 00:26:40.320 cpu : usr=91.84%, sys=6.02%, ctx=8, majf=0, minf=0 00:26:40.320 IO depths : 1=33.2%, 2=66.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.320 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.320 filename0: (groupid=0, jobs=1): err= 0: pid=91939: Fri Nov 8 04:10:15 2024 00:26:40.320 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(166MiB/5035msec) 00:26:40.320 slat (nsec): min=5988, max=49799, avg=11330.07, stdev=4739.78 00:26:40.320 clat (usec): min=4968, max=52080, avg=11348.80, stdev=11062.46 00:26:40.320 lat (usec): min=4987, max=52110, avg=11360.13, stdev=11062.59 00:26:40.320 clat percentiles (usec): 00:26:40.320 | 1.00th=[ 
5342], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7046], 00:26:40.320 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:26:40.320 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[48497], 00:26:40.320 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:26:40.320 | 99.99th=[52167] 00:26:40.320 bw ( KiB/s): min=19968, max=44544, per=30.40%, avg=33945.60, stdev=7720.85, samples=10 00:26:40.320 iops : min= 156, max= 348, avg=265.20, stdev=60.32, samples=10 00:26:40.320 lat (msec) : 10=89.92%, 20=2.18%, 50=6.32%, 100=1.58% 00:26:40.320 cpu : usr=94.10%, sys=4.41%, ctx=9, majf=0, minf=0 00:26:40.320 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.320 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.320 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.320 00:26:40.320 Run status group 0 (all jobs): 00:26:40.320 READ: bw=109MiB/s (114MB/s), 32.6MiB/s-43.8MiB/s (34.1MB/s-45.9MB/s), io=549MiB (576MB), run=5002-5035msec 00:26:40.320 04:10:15 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:40.320 04:10:15 -- target/dif.sh@43 -- # local sub 00:26:40.320 04:10:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.320 04:10:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:40.320 04:10:15 -- target/dif.sh@36 -- # local sub_id=0 00:26:40.320 04:10:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:40.320 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.320 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.320 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.320 04:10:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:40.320 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.320 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.320 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # bs=4k 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # numjobs=8 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # iodepth=16 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # runtime= 00:26:40.320 04:10:15 -- target/dif.sh@109 -- # files=2 00:26:40.320 04:10:15 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:40.320 04:10:15 -- target/dif.sh@28 -- # local sub 00:26:40.320 04:10:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.320 04:10:15 -- target/dif.sh@31 -- # create_subsystem 0 00:26:40.320 04:10:15 -- target/dif.sh@18 -- # local sub_id=0 00:26:40.320 04:10:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:40.320 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.320 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.580 bdev_null0 00:26:40.580 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.580 04:10:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:40.580 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.580 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.580 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.580 
04:10:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:40.580 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.580 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.580 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.580 04:10:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.580 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 [2024-11-08 04:10:15.456172] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.581 04:10:15 -- target/dif.sh@31 -- # create_subsystem 1 00:26:40.581 04:10:15 -- target/dif.sh@18 -- # local sub_id=1 00:26:40.581 04:10:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 bdev_null1 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.581 04:10:15 -- target/dif.sh@31 -- # create_subsystem 2 00:26:40.581 04:10:15 -- target/dif.sh@18 -- # local sub_id=2 00:26:40.581 04:10:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 bdev_null2 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 
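[Note: each create_subsystem pass traced in this block issues the same four RPCs through rpc_cmd, the autotest harness's wrapper around the target's RPC socket. Condensed for subsystem 0, with arguments copied from the rpc_cmd lines above, the direct scripts/rpc.py equivalents would be the following sketch; cnode1 and cnode2 repeat the pattern with their own --dif-type 2 null bdevs:

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
]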
00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:40.581 04:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.581 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.581 04:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.581 04:10:15 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:40.581 04:10:15 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:40.581 04:10:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:40.581 04:10:15 -- nvmf/common.sh@520 -- # config=() 00:26:40.581 04:10:15 -- nvmf/common.sh@520 -- # local subsystem config 00:26:40.581 04:10:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:40.581 04:10:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.581 04:10:15 -- target/dif.sh@82 -- # gen_fio_conf 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:40.581 { 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme$subsystem", 00:26:40.581 "trtype": "$TEST_TRANSPORT", 00:26:40.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "$NVMF_PORT", 00:26:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.581 "hdgst": ${hdgst:-false}, 00:26:40.581 "ddgst": ${ddgst:-false} 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 } 00:26:40.581 EOF 00:26:40.581 )") 00:26:40.581 04:10:15 -- target/dif.sh@54 -- # local file 00:26:40.581 04:10:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.581 04:10:15 -- target/dif.sh@56 -- # cat 00:26:40.581 04:10:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:40.581 04:10:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:40.581 04:10:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:40.581 04:10:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:40.581 04:10:15 -- common/autotest_common.sh@1330 -- # shift 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # cat 00:26:40.581 04:10:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:40.581 04:10:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.581 04:10:15 -- target/dif.sh@73 -- # cat 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file++ )) 00:26:40.581 04:10:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:40.581 { 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme$subsystem", 00:26:40.581 "trtype": "$TEST_TRANSPORT", 00:26:40.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "$NVMF_PORT", 00:26:40.581 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.581 "hdgst": ${hdgst:-false}, 00:26:40.581 "ddgst": ${ddgst:-false} 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 } 00:26:40.581 EOF 00:26:40.581 )") 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.581 04:10:15 -- target/dif.sh@73 -- # cat 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # cat 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file++ )) 00:26:40.581 04:10:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.581 04:10:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:40.581 { 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme$subsystem", 00:26:40.581 "trtype": "$TEST_TRANSPORT", 00:26:40.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "$NVMF_PORT", 00:26:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.581 "hdgst": ${hdgst:-false}, 00:26:40.581 "ddgst": ${ddgst:-false} 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 } 00:26:40.581 EOF 00:26:40.581 )") 00:26:40.581 04:10:15 -- nvmf/common.sh@542 -- # cat 00:26:40.581 04:10:15 -- nvmf/common.sh@544 -- # jq . 00:26:40.581 04:10:15 -- nvmf/common.sh@545 -- # IFS=, 00:26:40.581 04:10:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme0", 00:26:40.581 "trtype": "tcp", 00:26:40.581 "traddr": "10.0.0.2", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "4420", 00:26:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:40.581 "hdgst": false, 00:26:40.581 "ddgst": false 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 },{ 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme1", 00:26:40.581 "trtype": "tcp", 00:26:40.581 "traddr": "10.0.0.2", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "4420", 00:26:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:40.581 "hdgst": false, 00:26:40.581 "ddgst": false 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 },{ 00:26:40.581 "params": { 00:26:40.581 "name": "Nvme2", 00:26:40.581 "trtype": "tcp", 00:26:40.581 "traddr": "10.0.0.2", 00:26:40.581 "adrfam": "ipv4", 00:26:40.581 "trsvcid": "4420", 00:26:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:40.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:40.581 "hdgst": false, 00:26:40.581 "ddgst": false 00:26:40.581 }, 00:26:40.581 "method": "bdev_nvme_attach_controller" 00:26:40.581 }' 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:40.581 04:10:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:40.581 04:10:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:40.581 04:10:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:40.581 04:10:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:40.581 04:10:15 -- common/autotest_common.sh@1341 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:40.581 04:10:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.840 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:40.840 ... 00:26:40.840 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:40.840 ... 00:26:40.840 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:40.840 ... 00:26:40.840 fio-3.35 00:26:40.840 Starting 24 threads 00:26:41.407 [2024-11-08 04:10:16.382491] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:41.407 [2024-11-08 04:10:16.382542] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:53.658 00:26:53.658 filename0: (groupid=0, jobs=1): err= 0: pid=92038: Fri Nov 8 04:10:26 2024 00:26:53.658 read: IOPS=320, BW=1282KiB/s (1312kB/s)(12.6MiB/10041msec) 00:26:53.658 slat (usec): min=3, max=4027, avg=15.40, stdev=122.64 00:26:53.658 clat (msec): min=4, max=121, avg=49.78, stdev=16.82 00:26:53.658 lat (msec): min=4, max=121, avg=49.79, stdev=16.82 00:26:53.658 clat percentiles (msec): 00:26:53.658 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 37], 00:26:53.658 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 52], 00:26:53.658 | 70.00th=[ 57], 80.00th=[ 63], 90.00th=[ 71], 95.00th=[ 81], 00:26:53.658 | 99.00th=[ 99], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 123], 00:26:53.658 | 99.99th=[ 123] 00:26:53.658 bw ( KiB/s): min= 992, max= 2400, per=4.82%, avg=1280.40, stdev=295.43, samples=20 00:26:53.658 iops : min= 248, max= 600, avg=320.10, stdev=73.86, samples=20 00:26:53.658 lat (msec) : 10=0.99%, 20=0.50%, 50=54.74%, 100=42.83%, 250=0.93% 00:26:53.658 cpu : usr=43.96%, sys=0.58%, ctx=1159, majf=0, minf=9 00:26:53.658 IO depths : 1=1.3%, 2=2.9%, 4=11.0%, 8=72.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:53.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.658 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.658 issued rwts: total=3217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.658 filename0: (groupid=0, jobs=1): err= 0: pid=92039: Fri Nov 8 04:10:26 2024 00:26:53.658 read: IOPS=255, BW=1020KiB/s (1045kB/s)(9.97MiB/10010msec) 00:26:53.658 slat (usec): min=4, max=12038, avg=20.38, stdev=286.09 00:26:53.658 clat (msec): min=14, max=142, avg=62.53, stdev=18.76 00:26:53.658 lat (msec): min=14, max=142, avg=62.55, stdev=18.76 00:26:53.658 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 48], 00:26:53.659 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:26:53.659 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 00:26:53.659 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 142], 99.95th=[ 142], 00:26:53.659 | 99.99th=[ 142] 00:26:53.659 bw ( KiB/s): min= 768, max= 1536, per=3.76%, avg=999.84, stdev=147.84, samples=19 00:26:53.659 iops : min= 192, max= 384, avg=249.95, stdev=36.96, samples=19 00:26:53.659 lat (msec) : 20=0.24%, 50=24.99%, 100=71.37%, 250=3.41% 00:26:53.659 cpu : usr=32.90%, sys=0.36%, ctx=893, majf=0, minf=9 00:26:53.659 IO depths : 1=1.5%, 2=3.7%, 4=12.7%, 8=70.1%, 16=11.9%, 32=0.0%, 
>=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92040: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=279, BW=1118KiB/s (1145kB/s)(11.0MiB/10037msec) 00:26:53.659 slat (usec): min=4, max=11020, avg=22.66, stdev=297.90 00:26:53.659 clat (msec): min=8, max=117, avg=57.08, stdev=18.84 00:26:53.659 lat (msec): min=8, max=117, avg=57.10, stdev=18.86 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 45], 00:26:53.659 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:26:53.659 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 87], 00:26:53.659 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 118], 99.95th=[ 118], 00:26:53.659 | 99.99th=[ 118] 00:26:53.659 bw ( KiB/s): min= 864, max= 2195, per=4.20%, avg=1115.35, stdev=276.80, samples=20 00:26:53.659 iops : min= 216, max= 548, avg=278.80, stdev=69.05, samples=20 00:26:53.659 lat (msec) : 10=0.57%, 20=1.64%, 50=34.26%, 100=61.64%, 250=1.89% 00:26:53.659 cpu : usr=32.78%, sys=0.50%, ctx=881, majf=0, minf=9 00:26:53.659 IO depths : 1=1.1%, 2=2.5%, 4=10.5%, 8=73.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92041: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10024msec) 00:26:53.659 slat (usec): min=4, max=8030, avg=25.24, stdev=313.36 00:26:53.659 clat (msec): min=27, max=142, avg=61.16, stdev=18.02 00:26:53.659 lat (msec): min=27, max=142, avg=61.18, stdev=18.02 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 00:26:53.659 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:26:53.659 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 92], 00:26:53.659 | 99.00th=[ 109], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:26:53.659 | 99.99th=[ 144] 00:26:53.659 bw ( KiB/s): min= 768, max= 1712, per=3.91%, avg=1040.35, stdev=211.38, samples=20 00:26:53.659 iops : min= 192, max= 428, avg=260.05, stdev=52.83, samples=20 00:26:53.659 lat (msec) : 50=28.23%, 100=68.94%, 250=2.83% 00:26:53.659 cpu : usr=32.44%, sys=0.46%, ctx=852, majf=0, minf=9 00:26:53.659 IO depths : 1=2.0%, 2=4.2%, 4=12.4%, 8=70.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92042: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=252, BW=1011KiB/s (1035kB/s)(9.88MiB/10008msec) 00:26:53.659 slat (usec): min=4, max=8003, avg=19.20, stdev=195.78 00:26:53.659 clat (msec): min=16, max=135, avg=63.17, stdev=19.63 00:26:53.659 lat (msec): min=16, max=135, avg=63.19, 
stdev=19.63 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 48], 00:26:53.659 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:53.659 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 96], 00:26:53.659 | 99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:26:53.659 | 99.99th=[ 136] 00:26:53.659 bw ( KiB/s): min= 768, max= 1488, per=3.76%, avg=998.58, stdev=174.54, samples=19 00:26:53.659 iops : min= 192, max= 372, avg=249.63, stdev=43.63, samples=19 00:26:53.659 lat (msec) : 20=0.63%, 50=28.07%, 100=67.69%, 250=3.60% 00:26:53.659 cpu : usr=33.83%, sys=0.46%, ctx=908, majf=0, minf=9 00:26:53.659 IO depths : 1=1.9%, 2=4.3%, 4=13.8%, 8=68.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92043: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=308, BW=1235KiB/s (1265kB/s)(12.1MiB/10039msec) 00:26:53.659 slat (usec): min=4, max=8023, avg=15.59, stdev=161.07 00:26:53.659 clat (msec): min=8, max=118, avg=51.64, stdev=19.38 00:26:53.659 lat (msec): min=8, max=118, avg=51.65, stdev=19.38 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 36], 00:26:53.659 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 56], 00:26:53.659 | 70.00th=[ 60], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 89], 00:26:53.659 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 120], 00:26:53.659 | 99.99th=[ 120] 00:26:53.659 bw ( KiB/s): min= 768, max= 2176, per=4.64%, avg=1233.60, stdev=306.75, samples=20 00:26:53.659 iops : min= 192, max= 544, avg=308.40, stdev=76.69, samples=20 00:26:53.659 lat (msec) : 10=0.06%, 20=3.00%, 50=49.23%, 100=45.90%, 250=1.81% 00:26:53.659 cpu : usr=44.65%, sys=0.59%, ctx=1427, majf=0, minf=9 00:26:53.659 IO depths : 1=0.5%, 2=1.3%, 4=7.6%, 8=77.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=3100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92044: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10017msec) 00:26:53.659 slat (usec): min=4, max=8019, avg=17.78, stdev=176.73 00:26:53.659 clat (msec): min=20, max=146, avg=61.77, stdev=18.52 00:26:53.659 lat (msec): min=20, max=146, avg=61.78, stdev=18.52 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 48], 00:26:53.659 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:26:53.659 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 94], 00:26:53.659 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 146], 99.95th=[ 146], 00:26:53.659 | 99.99th=[ 146] 00:26:53.659 bw ( KiB/s): min= 768, max= 1712, per=3.85%, avg=1022.74, stdev=188.98, samples=19 00:26:53.659 iops : min= 192, max= 428, avg=255.68, stdev=47.25, samples=19 00:26:53.659 lat (msec) : 50=23.79%, 100=73.54%, 250=2.67% 00:26:53.659 cpu : usr=40.70%, sys=0.63%, ctx=1210, 
majf=0, minf=9 00:26:53.659 IO depths : 1=2.5%, 2=5.7%, 4=15.7%, 8=65.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename0: (groupid=0, jobs=1): err= 0: pid=92045: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=255, BW=1021KiB/s (1045kB/s)(9.98MiB/10016msec) 00:26:53.659 slat (usec): min=4, max=7047, avg=16.63, stdev=160.46 00:26:53.659 clat (msec): min=14, max=144, avg=62.58, stdev=20.44 00:26:53.659 lat (msec): min=16, max=144, avg=62.60, stdev=20.44 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 46], 00:26:53.659 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:26:53.659 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 100], 00:26:53.659 | 99.00th=[ 113], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:26:53.659 | 99.99th=[ 144] 00:26:53.659 bw ( KiB/s): min= 640, max= 1410, per=3.79%, avg=1008.95, stdev=165.15, samples=19 00:26:53.659 iops : min= 160, max= 352, avg=252.21, stdev=41.22, samples=19 00:26:53.659 lat (msec) : 20=0.63%, 50=29.19%, 100=65.26%, 250=4.93% 00:26:53.659 cpu : usr=35.40%, sys=0.38%, ctx=964, majf=0, minf=9 00:26:53.659 IO depths : 1=1.6%, 2=3.5%, 4=11.1%, 8=72.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename1: (groupid=0, jobs=1): err= 0: pid=92046: Fri Nov 8 04:10:26 2024 00:26:53.659 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.77MiB/10001msec) 00:26:53.659 slat (usec): min=4, max=8059, avg=19.16, stdev=227.23 00:26:53.659 clat (msec): min=10, max=141, avg=63.82, stdev=19.60 00:26:53.659 lat (msec): min=10, max=141, avg=63.84, stdev=19.60 00:26:53.659 clat percentiles (msec): 00:26:53.659 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 48], 00:26:53.659 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 67], 00:26:53.659 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 97], 00:26:53.659 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:26:53.659 | 99.99th=[ 142] 00:26:53.659 bw ( KiB/s): min= 763, max= 1280, per=3.68%, avg=979.11, stdev=118.96, samples=19 00:26:53.659 iops : min= 190, max= 320, avg=244.74, stdev=29.82, samples=19 00:26:53.659 lat (msec) : 20=0.64%, 50=22.18%, 100=72.74%, 250=4.44% 00:26:53.659 cpu : usr=33.01%, sys=0.48%, ctx=901, majf=0, minf=9 00:26:53.659 IO depths : 1=1.6%, 2=3.9%, 4=13.1%, 8=69.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:53.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.659 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.659 filename1: (groupid=0, jobs=1): err= 0: pid=92047: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=303, BW=1214KiB/s (1244kB/s)(11.9MiB/10030msec) 00:26:53.660 slat (usec): min=3, max=7414, avg=17.78, stdev=179.09 00:26:53.660 clat (usec): 
min=1296, max=121610, avg=52530.97, stdev=22530.85 00:26:53.660 lat (usec): min=1303, max=121642, avg=52548.75, stdev=22534.99 00:26:53.660 clat percentiles (usec): 00:26:53.660 | 1.00th=[ 1467], 5.00th=[ 11338], 10.00th=[ 22414], 20.00th=[ 35390], 00:26:53.660 | 30.00th=[ 41157], 40.00th=[ 46924], 50.00th=[ 54264], 60.00th=[ 58983], 00:26:53.660 | 70.00th=[ 62653], 80.00th=[ 68682], 90.00th=[ 82314], 95.00th=[ 90702], 00:26:53.660 | 99.00th=[107480], 99.50th=[107480], 99.90th=[121111], 99.95th=[121111], 00:26:53.660 | 99.99th=[121111] 00:26:53.660 bw ( KiB/s): min= 848, max= 3328, per=4.57%, avg=1214.75, stdev=522.33, samples=20 00:26:53.660 iops : min= 212, max= 832, avg=303.65, stdev=130.60, samples=20 00:26:53.660 lat (msec) : 2=1.58%, 4=0.20%, 10=2.43%, 20=4.04%, 50=35.53% 00:26:53.660 lat (msec) : 100=53.60%, 250=2.63% 00:26:53.660 cpu : usr=45.07%, sys=0.75%, ctx=1962, majf=0, minf=9 00:26:53.660 IO depths : 1=1.0%, 2=2.1%, 4=8.7%, 8=75.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=3045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92048: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=269, BW=1077KiB/s (1103kB/s)(10.5MiB/10001msec) 00:26:53.660 slat (usec): min=3, max=4023, avg=13.91, stdev=77.64 00:26:53.660 clat (msec): min=18, max=126, avg=59.30, stdev=17.77 00:26:53.660 lat (msec): min=18, max=126, avg=59.32, stdev=17.77 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 46], 00:26:53.660 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 62], 00:26:53.660 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 91], 00:26:53.660 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 127], 99.95th=[ 127], 00:26:53.660 | 99.99th=[ 127] 00:26:53.660 bw ( KiB/s): min= 763, max= 1568, per=4.01%, avg=1066.68, stdev=204.22, samples=19 00:26:53.660 iops : min= 190, max= 392, avg=266.63, stdev=51.12, samples=19 00:26:53.660 lat (msec) : 20=0.59%, 50=28.69%, 100=67.63%, 250=3.08% 00:26:53.660 cpu : usr=42.54%, sys=0.50%, ctx=1205, majf=0, minf=9 00:26:53.660 IO depths : 1=1.9%, 2=4.2%, 4=13.2%, 8=69.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92049: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=305, BW=1221KiB/s (1250kB/s)(12.0MiB/10035msec) 00:26:53.660 slat (usec): min=3, max=8020, avg=19.11, stdev=207.64 00:26:53.660 clat (msec): min=3, max=121, avg=52.24, stdev=19.15 00:26:53.660 lat (msec): min=3, max=122, avg=52.26, stdev=19.15 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 37], 00:26:53.660 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 58], 00:26:53.660 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 82], 00:26:53.660 | 99.00th=[ 104], 99.50th=[ 116], 99.90th=[ 123], 99.95th=[ 123], 00:26:53.660 | 99.99th=[ 123] 00:26:53.660 bw ( KiB/s): min= 896, max= 2608, per=4.58%, avg=1218.65, 
stdev=353.81, samples=20 00:26:53.660 iops : min= 224, max= 652, avg=304.65, stdev=88.46, samples=20 00:26:53.660 lat (msec) : 4=0.16%, 10=1.40%, 20=3.85%, 50=38.30%, 100=54.36% 00:26:53.660 lat (msec) : 250=1.93% 00:26:53.660 cpu : usr=43.12%, sys=0.60%, ctx=1326, majf=0, minf=9 00:26:53.660 IO depths : 1=1.2%, 2=2.7%, 4=10.1%, 8=73.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=3063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92050: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.88MiB/10020msec) 00:26:53.660 slat (usec): min=3, max=8029, avg=24.47, stdev=292.16 00:26:53.660 clat (msec): min=14, max=122, avg=63.18, stdev=18.84 00:26:53.660 lat (msec): min=14, max=122, avg=63.20, stdev=18.84 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 53], 00:26:53.660 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 67], 00:26:53.660 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 93], 00:26:53.660 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 123], 00:26:53.660 | 99.99th=[ 123] 00:26:53.660 bw ( KiB/s): min= 766, max= 1920, per=3.78%, avg=1004.53, stdev=240.09, samples=19 00:26:53.660 iops : min= 191, max= 480, avg=251.11, stdev=60.05, samples=19 00:26:53.660 lat (msec) : 20=1.15%, 50=15.34%, 100=81.98%, 250=1.54% 00:26:53.660 cpu : usr=40.36%, sys=0.58%, ctx=1362, majf=0, minf=9 00:26:53.660 IO depths : 1=3.0%, 2=7.0%, 4=18.6%, 8=61.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=2530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92051: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=256, BW=1028KiB/s (1052kB/s)(10.0MiB/10010msec) 00:26:53.660 slat (usec): min=4, max=4024, avg=18.27, stdev=136.83 00:26:53.660 clat (msec): min=23, max=135, avg=62.15, stdev=18.63 00:26:53.660 lat (msec): min=23, max=135, avg=62.17, stdev=18.63 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 48], 00:26:53.660 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:26:53.660 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 97], 00:26:53.660 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 136], 00:26:53.660 | 99.99th=[ 136] 00:26:53.660 bw ( KiB/s): min= 768, max= 1456, per=3.84%, avg=1019.89, stdev=154.26, samples=19 00:26:53.660 iops : min= 192, max= 364, avg=254.95, stdev=38.59, samples=19 00:26:53.660 lat (msec) : 50=25.58%, 100=70.92%, 250=3.50% 00:26:53.660 cpu : usr=43.27%, sys=0.55%, ctx=1138, majf=0, minf=9 00:26:53.660 IO depths : 1=2.4%, 2=5.2%, 4=14.2%, 8=67.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 
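[Note: the per= figure in each job's bw line is that job's average bandwidth as a share of the whole group's aggregate, which the run summary further down reports as 25.9MiB/s. Checking the job above (pid=92051): avg=1019.89 KiB/s against 25.9 x 1024 ≈ 26522 KiB/s gives about 3.85%, matching the reported per=3.84% within rounding of the aggregate. A one-line sketch of the same arithmetic, not part of the test:

    awk 'BEGIN { printf "per=%.2f%%\n", 1019.89 / (25.9 * 1024) * 100 }'
]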
00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92052: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=288, BW=1155KiB/s (1182kB/s)(11.3MiB/10008msec) 00:26:53.660 slat (usec): min=4, max=4018, avg=14.75, stdev=95.20 00:26:53.660 clat (msec): min=14, max=131, avg=55.33, stdev=17.79 00:26:53.660 lat (msec): min=14, max=131, avg=55.35, stdev=17.80 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:26:53.660 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 00:26:53.660 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 91], 00:26:53.660 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 122], 99.95th=[ 132], 00:26:53.660 | 99.99th=[ 132] 00:26:53.660 bw ( KiB/s): min= 896, max= 1619, per=4.29%, avg=1141.63, stdev=181.22, samples=19 00:26:53.660 iops : min= 224, max= 404, avg=285.37, stdev=45.20, samples=19 00:26:53.660 lat (msec) : 20=0.21%, 50=41.54%, 100=56.66%, 250=1.59% 00:26:53.660 cpu : usr=43.49%, sys=0.67%, ctx=1375, majf=0, minf=9 00:26:53.660 IO depths : 1=1.2%, 2=2.7%, 4=10.6%, 8=73.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename1: (groupid=0, jobs=1): err= 0: pid=92053: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=247, BW=989KiB/s (1013kB/s)(9892KiB/10004msec) 00:26:53.660 slat (nsec): min=4813, max=58450, avg=12595.03, stdev=7652.88 00:26:53.660 clat (msec): min=4, max=158, avg=64.64, stdev=20.71 00:26:53.660 lat (msec): min=4, max=158, avg=64.65, stdev=20.71 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 49], 00:26:53.660 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 69], 00:26:53.660 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 93], 95.00th=[ 105], 00:26:53.660 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:26:53.660 | 99.99th=[ 159] 00:26:53.660 bw ( KiB/s): min= 672, max= 1456, per=3.64%, avg=967.16, stdev=157.85, samples=19 00:26:53.660 iops : min= 168, max= 364, avg=241.79, stdev=39.46, samples=19 00:26:53.660 lat (msec) : 10=0.65%, 20=0.16%, 50=20.70%, 100=73.03%, 250=5.46% 00:26:53.660 cpu : usr=33.39%, sys=0.52%, ctx=909, majf=0, minf=9 00:26:53.660 IO depths : 1=1.6%, 2=3.7%, 4=12.7%, 8=70.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.660 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.660 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.660 filename2: (groupid=0, jobs=1): err= 0: pid=92054: Fri Nov 8 04:10:26 2024 00:26:53.660 read: IOPS=322, BW=1289KiB/s (1320kB/s)(12.6MiB/10041msec) 00:26:53.660 slat (usec): min=5, max=6806, avg=14.43, stdev=133.29 00:26:53.660 clat (msec): min=8, max=108, avg=49.52, stdev=17.68 00:26:53.660 lat (msec): min=8, max=108, avg=49.53, stdev=17.68 00:26:53.660 clat percentiles (msec): 00:26:53.660 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 36], 00:26:53.660 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 54], 00:26:53.660 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 81], 00:26:53.660 | 99.00th=[ 95], 99.50th=[ 95], 
99.90th=[ 108], 99.95th=[ 109], 00:26:53.660 | 99.99th=[ 109] 00:26:53.660 bw ( KiB/s): min= 952, max= 2810, per=4.85%, avg=1289.70, stdev=393.75, samples=20 00:26:53.660 iops : min= 238, max= 702, avg=322.40, stdev=98.34, samples=20 00:26:53.660 lat (msec) : 10=0.49%, 20=5.94%, 50=49.15%, 100=44.14%, 250=0.28% 00:26:53.661 cpu : usr=42.55%, sys=0.63%, ctx=1141, majf=0, minf=9 00:26:53.661 IO depths : 1=1.0%, 2=2.1%, 4=8.3%, 8=75.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=3235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92055: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=274, BW=1097KiB/s (1123kB/s)(10.7MiB/10017msec) 00:26:53.661 slat (usec): min=4, max=8027, avg=20.83, stdev=254.26 00:26:53.661 clat (msec): min=7, max=154, avg=58.20, stdev=19.52 00:26:53.661 lat (msec): min=7, max=154, avg=58.23, stdev=19.52 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 43], 00:26:53.661 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 62], 00:26:53.661 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 89], 00:26:53.661 | 99.00th=[ 111], 99.50th=[ 128], 99.90th=[ 155], 99.95th=[ 155], 00:26:53.661 | 99.99th=[ 155] 00:26:53.661 bw ( KiB/s): min= 768, max= 2152, per=4.08%, avg=1084.21, stdev=286.67, samples=19 00:26:53.661 iops : min= 192, max= 538, avg=271.05, stdev=71.67, samples=19 00:26:53.661 lat (msec) : 10=0.33%, 20=1.49%, 50=31.42%, 100=64.32%, 250=2.44% 00:26:53.661 cpu : usr=37.87%, sys=0.55%, ctx=993, majf=0, minf=9 00:26:53.661 IO depths : 1=2.0%, 2=4.5%, 4=13.2%, 8=69.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92056: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=288, BW=1154KiB/s (1181kB/s)(11.3MiB/10020msec) 00:26:53.661 slat (usec): min=4, max=4038, avg=17.36, stdev=133.93 00:26:53.661 clat (msec): min=23, max=117, avg=55.31, stdev=17.04 00:26:53.661 lat (msec): min=23, max=117, avg=55.33, stdev=17.04 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 41], 00:26:53.661 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 58], 00:26:53.661 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 87], 00:26:53.661 | 99.00th=[ 105], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:26:53.661 | 99.99th=[ 118] 00:26:53.661 bw ( KiB/s): min= 896, max= 1667, per=4.34%, avg=1153.75, stdev=203.19, samples=20 00:26:53.661 iops : min= 224, max= 416, avg=288.40, stdev=50.70, samples=20 00:26:53.661 lat (msec) : 50=39.69%, 100=58.13%, 250=2.18% 00:26:53.661 cpu : usr=42.68%, sys=0.68%, ctx=1577, majf=0, minf=9 00:26:53.661 IO depths : 1=1.7%, 2=3.8%, 4=12.2%, 8=70.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: 
total=2890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92057: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=284, BW=1140KiB/s (1167kB/s)(11.2MiB/10025msec) 00:26:53.661 slat (usec): min=5, max=8031, avg=22.03, stdev=270.24 00:26:53.661 clat (msec): min=8, max=110, avg=55.92, stdev=18.09 00:26:53.661 lat (msec): min=8, max=110, avg=55.94, stdev=18.10 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:26:53.661 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 00:26:53.661 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 91], 00:26:53.661 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 111], 00:26:53.661 | 99.99th=[ 111] 00:26:53.661 bw ( KiB/s): min= 760, max= 1843, per=4.29%, avg=1140.15, stdev=259.91, samples=20 00:26:53.661 iops : min= 190, max= 460, avg=285.00, stdev=64.87, samples=20 00:26:53.661 lat (msec) : 10=0.07%, 20=1.05%, 50=41.35%, 100=55.88%, 250=1.65% 00:26:53.661 cpu : usr=33.33%, sys=0.51%, ctx=896, majf=0, minf=9 00:26:53.661 IO depths : 1=0.5%, 2=1.1%, 4=8.7%, 8=76.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92058: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.3MiB/10039msec) 00:26:53.661 slat (usec): min=4, max=8048, avg=18.44, stdev=220.91 00:26:53.661 clat (msec): min=23, max=143, avg=60.59, stdev=18.91 00:26:53.661 lat (msec): min=23, max=143, avg=60.61, stdev=18.92 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 47], 00:26:53.661 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 62], 00:26:53.661 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 00:26:53.661 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 144], 99.95th=[ 144], 00:26:53.661 | 99.99th=[ 144] 00:26:53.661 bw ( KiB/s): min= 768, max= 1536, per=3.96%, avg=1053.20, stdev=179.49, samples=20 00:26:53.661 iops : min= 192, max= 384, avg=263.30, stdev=44.87, samples=20 00:26:53.661 lat (msec) : 50=33.48%, 100=63.87%, 250=2.65% 00:26:53.661 cpu : usr=32.32%, sys=0.44%, ctx=846, majf=0, minf=9 00:26:53.661 IO depths : 1=0.9%, 2=2.4%, 4=11.0%, 8=72.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92059: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=289, BW=1156KiB/s (1184kB/s)(11.3MiB/10017msec) 00:26:53.661 slat (usec): min=4, max=8034, avg=15.58, stdev=149.29 00:26:53.661 clat (msec): min=17, max=130, avg=55.25, stdev=18.10 00:26:53.661 lat (msec): min=17, max=130, avg=55.27, stdev=18.10 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 37], 00:26:53.661 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 58], 60.00th=[ 60], 00:26:53.661 | 
70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 85], 00:26:53.661 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 131], 99.95th=[ 131], 00:26:53.661 | 99.99th=[ 131] 00:26:53.661 bw ( KiB/s): min= 848, max= 1792, per=4.34%, avg=1152.00, stdev=206.04, samples=19 00:26:53.661 iops : min= 212, max= 448, avg=288.00, stdev=51.51, samples=19 00:26:53.661 lat (msec) : 20=0.55%, 50=42.82%, 100=55.73%, 250=0.90% 00:26:53.661 cpu : usr=32.77%, sys=0.42%, ctx=871, majf=0, minf=9 00:26:53.661 IO depths : 1=0.9%, 2=2.2%, 4=8.6%, 8=75.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=89.9%, 8=6.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92060: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=287, BW=1149KiB/s (1177kB/s)(11.3MiB/10031msec) 00:26:53.661 slat (usec): min=3, max=9066, avg=17.12, stdev=184.57 00:26:53.661 clat (msec): min=23, max=154, avg=55.54, stdev=17.61 00:26:53.661 lat (msec): min=23, max=154, avg=55.56, stdev=17.61 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 40], 00:26:53.661 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 60], 00:26:53.661 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 84], 00:26:53.661 | 99.00th=[ 108], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:26:53.661 | 99.99th=[ 155] 00:26:53.661 bw ( KiB/s): min= 896, max= 1920, per=4.32%, avg=1148.40, stdev=217.27, samples=20 00:26:53.661 iops : min= 224, max= 480, avg=287.10, stdev=54.32, samples=20 00:26:53.661 lat (msec) : 50=39.10%, 100=59.78%, 250=1.11% 00:26:53.661 cpu : usr=38.75%, sys=0.56%, ctx=1087, majf=0, minf=9 00:26:53.661 IO depths : 1=1.6%, 2=3.3%, 4=9.9%, 8=73.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 filename2: (groupid=0, jobs=1): err= 0: pid=92061: Fri Nov 8 04:10:26 2024 00:26:53.661 read: IOPS=280, BW=1123KiB/s (1150kB/s)(11.0MiB/10027msec) 00:26:53.661 slat (usec): min=4, max=8028, avg=22.52, stdev=262.47 00:26:53.661 clat (msec): min=16, max=147, avg=56.83, stdev=18.84 00:26:53.661 lat (msec): min=16, max=147, avg=56.85, stdev=18.84 00:26:53.661 clat percentiles (msec): 00:26:53.661 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:26:53.661 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 00:26:53.661 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 91], 00:26:53.661 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 148], 99.95th=[ 148], 00:26:53.661 | 99.99th=[ 148] 00:26:53.661 bw ( KiB/s): min= 768, max= 1888, per=4.21%, avg=1119.15, stdev=238.72, samples=20 00:26:53.661 iops : min= 192, max= 472, avg=279.75, stdev=59.69, samples=20 00:26:53.661 lat (msec) : 20=0.57%, 50=37.98%, 100=58.54%, 250=2.91% 00:26:53.661 cpu : usr=39.65%, sys=0.68%, ctx=1152, majf=0, minf=9 00:26:53.661 IO depths : 1=1.1%, 2=2.6%, 4=9.7%, 8=73.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:53.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 complete : 0=0.0%, 
4=90.2%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.661 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:53.661 00:26:53.661 Run status group 0 (all jobs): 00:26:53.661 READ: bw=25.9MiB/s (27.2MB/s), 989KiB/s-1289KiB/s (1013kB/s-1320kB/s), io=261MiB (273MB), run=10001-10041msec 00:26:53.661 04:10:26 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:53.661 04:10:26 -- target/dif.sh@43 -- # local sub 00:26:53.661 04:10:26 -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.661 04:10:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:53.661 04:10:26 -- target/dif.sh@36 -- # local sub_id=0 00:26:53.661 04:10:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:53.661 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.661 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.662 04:10:26 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:53.662 04:10:26 -- target/dif.sh@36 -- # local sub_id=1 00:26:53.662 04:10:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.662 04:10:26 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:53.662 04:10:26 -- target/dif.sh@36 -- # local sub_id=2 00:26:53.662 04:10:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # numjobs=2 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # iodepth=8 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # runtime=5 00:26:53.662 04:10:26 -- target/dif.sh@115 -- # files=1 00:26:53.662 04:10:26 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:53.662 04:10:26 -- target/dif.sh@28 -- # local sub 00:26:53.662 04:10:26 -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.662 04:10:26 -- target/dif.sh@31 -- # create_subsystem 0 00:26:53.662 
04:10:26 -- target/dif.sh@18 -- # local sub_id=0 00:26:53.662 04:10:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 bdev_null0 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 [2024-11-08 04:10:26.994581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.662 04:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:26 -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.662 04:10:26 -- target/dif.sh@31 -- # create_subsystem 1 00:26:53.662 04:10:26 -- target/dif.sh@18 -- # local sub_id=1 00:26:53.662 04:10:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:53.662 04:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 bdev_null1 00:26:53.662 04:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:53.662 04:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:53.662 04:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.662 04:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.662 04:10:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.662 04:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.662 04:10:27 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:53.662 04:10:27 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:53.662 04:10:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:53.662 04:10:27 -- nvmf/common.sh@520 -- # config=() 00:26:53.662 04:10:27 -- nvmf/common.sh@520 -- # local subsystem config 00:26:53.662 04:10:27 -- nvmf/common.sh@522 -- 
# for subsystem in "${@:-1}" 00:26:53.662 04:10:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.662 04:10:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:53.662 { 00:26:53.662 "params": { 00:26:53.662 "name": "Nvme$subsystem", 00:26:53.662 "trtype": "$TEST_TRANSPORT", 00:26:53.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.662 "adrfam": "ipv4", 00:26:53.662 "trsvcid": "$NVMF_PORT", 00:26:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.662 "hdgst": ${hdgst:-false}, 00:26:53.662 "ddgst": ${ddgst:-false} 00:26:53.662 }, 00:26:53.662 "method": "bdev_nvme_attach_controller" 00:26:53.662 } 00:26:53.662 EOF 00:26:53.662 )") 00:26:53.662 04:10:27 -- target/dif.sh@82 -- # gen_fio_conf 00:26:53.662 04:10:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.662 04:10:27 -- target/dif.sh@54 -- # local file 00:26:53.662 04:10:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:53.662 04:10:27 -- target/dif.sh@56 -- # cat 00:26:53.662 04:10:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:53.662 04:10:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:53.662 04:10:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.662 04:10:27 -- nvmf/common.sh@542 -- # cat 00:26:53.662 04:10:27 -- common/autotest_common.sh@1330 -- # shift 00:26:53.662 04:10:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:53.662 04:10:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.662 04:10:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:53.662 04:10:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.662 04:10:27 -- target/dif.sh@73 -- # cat 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:53.662 04:10:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:53.662 04:10:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:53.662 { 00:26:53.662 "params": { 00:26:53.662 "name": "Nvme$subsystem", 00:26:53.662 "trtype": "$TEST_TRANSPORT", 00:26:53.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.662 "adrfam": "ipv4", 00:26:53.662 "trsvcid": "$NVMF_PORT", 00:26:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.662 "hdgst": ${hdgst:-false}, 00:26:53.662 "ddgst": ${ddgst:-false} 00:26:53.662 }, 00:26:53.662 "method": "bdev_nvme_attach_controller" 00:26:53.662 } 00:26:53.662 EOF 00:26:53.662 )") 00:26:53.662 04:10:27 -- nvmf/common.sh@542 -- # cat 00:26:53.662 04:10:27 -- target/dif.sh@72 -- # (( file++ )) 00:26:53.662 04:10:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.662 04:10:27 -- nvmf/common.sh@544 -- # jq . 
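[Note: the gen_nvmf_target_json trace above accumulates one JSON fragment per subsystem into a bash array via here-docs (the config+=(...) lines), and the jq/IFS=,/printf steps around this point validate and comma-join those fragments into the single blob handed to fio on /dev/fd/62. A minimal standalone sketch of that accumulate-then-join idiom, with illustrative names rather than the script's:

    config=()
    for i in 0 1; do
      config+=("{\"name\": \"Nvme$i\"}")   # stand-in for the per-subsystem here-doc
    done
    IFS=,                                  # "${config[*]}" expands joined on IFS
    printf '%s\n' "${config[*]}"           # -> {"name": "Nvme0"},{"name": "Nvme1"}
]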
00:26:53.662 04:10:27 -- nvmf/common.sh@545 -- # IFS=, 00:26:53.662 04:10:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:53.662 "params": { 00:26:53.662 "name": "Nvme0", 00:26:53.662 "trtype": "tcp", 00:26:53.662 "traddr": "10.0.0.2", 00:26:53.662 "adrfam": "ipv4", 00:26:53.662 "trsvcid": "4420", 00:26:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:53.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:53.662 "hdgst": false, 00:26:53.662 "ddgst": false 00:26:53.662 }, 00:26:53.662 "method": "bdev_nvme_attach_controller" 00:26:53.662 },{ 00:26:53.662 "params": { 00:26:53.662 "name": "Nvme1", 00:26:53.662 "trtype": "tcp", 00:26:53.662 "traddr": "10.0.0.2", 00:26:53.662 "adrfam": "ipv4", 00:26:53.662 "trsvcid": "4420", 00:26:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.662 "hdgst": false, 00:26:53.662 "ddgst": false 00:26:53.662 }, 00:26:53.662 "method": "bdev_nvme_attach_controller" 00:26:53.662 }' 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:53.662 04:10:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:53.662 04:10:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:53.662 04:10:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:53.662 04:10:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:53.662 04:10:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:53.662 04:10:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.663 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:53.663 ... 00:26:53.663 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:53.663 ... 00:26:53.663 fio-3.35 00:26:53.663 Starting 4 threads 00:26:53.663 [2024-11-08 04:10:27.738824] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:53.663 [2024-11-08 04:10:27.738896] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:57.849 00:26:57.849 filename0: (groupid=0, jobs=1): err= 0: pid=92196: Fri Nov 8 04:10:32 2024 00:26:57.849 read: IOPS=2327, BW=18.2MiB/s (19.1MB/s)(90.9MiB/5002msec) 00:26:57.849 slat (nsec): min=5925, max=97956, avg=22041.54, stdev=12389.48 00:26:57.849 clat (usec): min=1803, max=7495, avg=3322.65, stdev=163.95 00:26:57.849 lat (usec): min=1813, max=7508, avg=3344.69, stdev=165.86 00:26:57.849 clat percentiles (usec): 00:26:57.849 | 1.00th=[ 3097], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3228], 00:26:57.849 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3294], 60.00th=[ 3326], 00:26:57.849 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3458], 95.00th=[ 3523], 00:26:57.849 | 99.00th=[ 3818], 99.50th=[ 4080], 99.90th=[ 5211], 99.95th=[ 5538], 00:26:57.849 | 99.99th=[ 5997] 00:26:57.849 bw ( KiB/s): min=18304, max=18832, per=24.95%, avg=18598.44, stdev=172.11, samples=9 00:26:57.849 iops : min= 2288, max= 2354, avg=2324.78, stdev=21.50, samples=9 00:26:57.849 lat (msec) : 2=0.03%, 4=99.42%, 10=0.55% 00:26:57.849 cpu : usr=95.16%, sys=3.46%, ctx=17, majf=0, minf=9 00:26:57.849 IO depths : 1=9.5%, 2=25.0%, 4=50.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 issued rwts: total=11640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:57.849 filename0: (groupid=0, jobs=1): err= 0: pid=92197: Fri Nov 8 04:10:32 2024 00:26:57.849 read: IOPS=2327, BW=18.2MiB/s (19.1MB/s)(90.9MiB/5002msec) 00:26:57.849 slat (nsec): min=6064, max=98420, avg=22586.03, stdev=12080.08 00:26:57.849 clat (usec): min=2635, max=6405, avg=3322.96, stdev=147.29 00:26:57.849 lat (usec): min=2645, max=6418, avg=3345.54, stdev=148.71 00:26:57.849 clat percentiles (usec): 00:26:57.849 | 1.00th=[ 3097], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3228], 00:26:57.849 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3294], 60.00th=[ 3326], 00:26:57.849 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3458], 95.00th=[ 3523], 00:26:57.849 | 99.00th=[ 3785], 99.50th=[ 3949], 99.90th=[ 4555], 99.95th=[ 5538], 00:26:57.849 | 99.99th=[ 5735] 00:26:57.849 bw ( KiB/s): min=18304, max=18816, per=24.96%, avg=18606.67, stdev=185.26, samples=9 00:26:57.849 iops : min= 2288, max= 2352, avg=2325.78, stdev=23.25, samples=9 00:26:57.849 lat (msec) : 4=99.56%, 10=0.44% 00:26:57.849 cpu : usr=94.92%, sys=3.80%, ctx=19, majf=0, minf=0 00:26:57.849 IO depths : 1=11.7%, 2=25.0%, 4=50.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 issued rwts: total=11640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:57.849 filename1: (groupid=0, jobs=1): err= 0: pid=92198: Fri Nov 8 04:10:32 2024 00:26:57.849 read: IOPS=2328, BW=18.2MiB/s (19.1MB/s)(91.0MiB/5003msec) 00:26:57.849 slat (nsec): min=6213, max=91509, avg=15396.75, stdev=9005.15 00:26:57.849 clat (usec): min=2153, max=5755, avg=3364.39, stdev=140.85 00:26:57.849 lat (usec): min=2160, max=5779, avg=3379.79, stdev=140.02 00:26:57.849 clat percentiles (usec): 00:26:57.849 | 1.00th=[ 3130], 5.00th=[ 
3195], 10.00th=[ 3228], 20.00th=[ 3294], 00:26:57.849 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3359], 00:26:57.849 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3490], 95.00th=[ 3556], 00:26:57.849 | 99.00th=[ 3785], 99.50th=[ 3916], 99.90th=[ 4555], 99.95th=[ 5604], 00:26:57.849 | 99.99th=[ 5735] 00:26:57.849 bw ( KiB/s): min=18304, max=18816, per=24.97%, avg=18616.89, stdev=158.21, samples=9 00:26:57.849 iops : min= 2288, max= 2352, avg=2327.11, stdev=19.78, samples=9 00:26:57.849 lat (msec) : 4=99.56%, 10=0.44% 00:26:57.849 cpu : usr=95.20%, sys=3.30%, ctx=9, majf=0, minf=0 00:26:57.849 IO depths : 1=10.1%, 2=24.8%, 4=50.2%, 8=14.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.849 issued rwts: total=11648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:57.849 filename1: (groupid=0, jobs=1): err= 0: pid=92199: Fri Nov 8 04:10:32 2024 00:26:57.850 read: IOPS=2336, BW=18.3MiB/s (19.1MB/s)(91.3MiB/5003msec) 00:26:57.850 slat (nsec): min=6066, max=65057, avg=10802.68, stdev=6944.58 00:26:57.850 clat (usec): min=862, max=5653, avg=3372.60, stdev=197.36 00:26:57.850 lat (usec): min=869, max=5685, avg=3383.41, stdev=197.31 00:26:57.850 clat percentiles (usec): 00:26:57.850 | 1.00th=[ 3130], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3294], 00:26:57.850 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:26:57.850 | 70.00th=[ 3425], 80.00th=[ 3425], 90.00th=[ 3490], 95.00th=[ 3556], 00:26:57.850 | 99.00th=[ 3752], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 5604], 00:26:57.850 | 99.99th=[ 5669] 00:26:57.850 bw ( KiB/s): min=18432, max=18816, per=25.08%, avg=18693.89, stdev=112.25, samples=9 00:26:57.850 iops : min= 2304, max= 2352, avg=2336.67, stdev=14.00, samples=9 00:26:57.850 lat (usec) : 1000=0.15% 00:26:57.850 lat (msec) : 2=0.31%, 4=99.02%, 10=0.51% 00:26:57.850 cpu : usr=95.54%, sys=3.32%, ctx=6, majf=0, minf=0 00:26:57.850 IO depths : 1=10.5%, 2=23.5%, 4=51.5%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.850 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.850 issued rwts: total=11688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:57.850 00:26:57.850 Run status group 0 (all jobs): 00:26:57.850 READ: bw=72.8MiB/s (76.3MB/s), 18.2MiB/s-18.3MiB/s (19.1MB/s-19.1MB/s), io=364MiB (382MB), run=5002-5003msec 00:26:58.108 04:10:33 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:58.108 04:10:33 -- target/dif.sh@43 -- # local sub 00:26:58.108 04:10:33 -- target/dif.sh@45 -- # for sub in "$@" 00:26:58.108 04:10:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:58.108 04:10:33 -- target/dif.sh@36 -- # local sub_id=0 00:26:58.109 04:10:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:58.109 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.109 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.109 04:10:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:58.109 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 
-- # set +x 00:26:58.109 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.109 04:10:33 -- target/dif.sh@45 -- # for sub in "$@" 00:26:58.109 04:10:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:58.109 04:10:33 -- target/dif.sh@36 -- # local sub_id=1 00:26:58.109 04:10:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:58.109 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.109 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.109 04:10:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:58.109 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.109 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.109 00:26:58.109 real 0m23.837s 00:26:58.109 user 2m8.011s 00:26:58.109 sys 0m3.618s 00:26:58.109 ************************************ 00:26:58.109 END TEST fio_dif_rand_params 00:26:58.109 ************************************ 00:26:58.109 04:10:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.109 04:10:33 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:58.109 04:10:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:58.109 04:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:58.109 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 ************************************ 00:26:58.367 START TEST fio_dif_digest 00:26:58.367 ************************************ 00:26:58.367 04:10:33 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:58.367 04:10:33 -- target/dif.sh@123 -- # local NULL_DIF 00:26:58.367 04:10:33 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:58.367 04:10:33 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:58.367 04:10:33 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:58.367 04:10:33 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:58.367 04:10:33 -- target/dif.sh@127 -- # numjobs=3 00:26:58.367 04:10:33 -- target/dif.sh@127 -- # iodepth=3 00:26:58.367 04:10:33 -- target/dif.sh@127 -- # runtime=10 00:26:58.367 04:10:33 -- target/dif.sh@128 -- # hdgst=true 00:26:58.367 04:10:33 -- target/dif.sh@128 -- # ddgst=true 00:26:58.367 04:10:33 -- target/dif.sh@130 -- # create_subsystems 0 00:26:58.367 04:10:33 -- target/dif.sh@28 -- # local sub 00:26:58.367 04:10:33 -- target/dif.sh@30 -- # for sub in "$@" 00:26:58.367 04:10:33 -- target/dif.sh@31 -- # create_subsystem 0 00:26:58.367 04:10:33 -- target/dif.sh@18 -- # local sub_id=0 00:26:58.368 04:10:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:58.368 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.368 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.368 bdev_null0 00:26:58.368 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.368 04:10:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:58.368 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.368 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.368 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.368 04:10:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:58.368 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.368 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.368 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.368 04:10:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:58.368 04:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.368 04:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.368 [2024-11-08 04:10:33.266307] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.368 04:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.368 04:10:33 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:58.368 04:10:33 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:58.368 04:10:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:58.368 04:10:33 -- nvmf/common.sh@520 -- # config=() 00:26:58.368 04:10:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.368 04:10:33 -- nvmf/common.sh@520 -- # local subsystem config 00:26:58.368 04:10:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.368 04:10:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:58.368 04:10:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:58.368 04:10:33 -- target/dif.sh@82 -- # gen_fio_conf 00:26:58.368 04:10:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:58.368 { 00:26:58.368 "params": { 00:26:58.368 "name": "Nvme$subsystem", 00:26:58.368 "trtype": "$TEST_TRANSPORT", 00:26:58.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:58.368 "adrfam": "ipv4", 00:26:58.368 "trsvcid": "$NVMF_PORT", 00:26:58.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:58.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:58.368 "hdgst": ${hdgst:-false}, 00:26:58.368 "ddgst": ${ddgst:-false} 00:26:58.368 }, 00:26:58.368 "method": "bdev_nvme_attach_controller" 00:26:58.368 } 00:26:58.368 EOF 00:26:58.368 )") 00:26:58.368 04:10:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.368 04:10:33 -- target/dif.sh@54 -- # local file 00:26:58.368 04:10:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:58.368 04:10:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.368 04:10:33 -- target/dif.sh@56 -- # cat 00:26:58.368 04:10:33 -- common/autotest_common.sh@1330 -- # shift 00:26:58.368 04:10:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:58.368 04:10:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.368 04:10:33 -- nvmf/common.sh@542 -- # cat 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:58.368 04:10:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:58.368 04:10:33 -- target/dif.sh@72 -- # (( file <= files )) 00:26:58.368 04:10:33 -- nvmf/common.sh@544 -- # jq . 
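
The digest run reuses the same launcher, and the sanitizer probing traced around it is worth calling out: before exec'ing fio, the harness runs ldd on the SPDK bdev plugin, greps for an ASan runtime (libasan, then libclang_rt.asan), and would preload it ahead of the plugin; here both lookups come back empty, so LD_PRELOAD ends up carrying only the plugin itself. Condensed into a standalone sketch (paths as in the trace; /dev/fd/62 and /dev/fd/61 stand in for the JSON config and fio job file the harness feeds in via process substitution):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved library path, if the plugin links it.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# An ASan'd plugin must see its runtime first; otherwise this preloads just
# the plugin, which is what gives fio its spdk_bdev ioengine.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
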
00:26:58.368 04:10:33 -- nvmf/common.sh@545 -- # IFS=, 00:26:58.368 04:10:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:58.368 "params": { 00:26:58.368 "name": "Nvme0", 00:26:58.368 "trtype": "tcp", 00:26:58.368 "traddr": "10.0.0.2", 00:26:58.368 "adrfam": "ipv4", 00:26:58.368 "trsvcid": "4420", 00:26:58.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:58.368 "hdgst": true, 00:26:58.368 "ddgst": true 00:26:58.368 }, 00:26:58.368 "method": "bdev_nvme_attach_controller" 00:26:58.368 }' 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:58.368 04:10:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:58.368 04:10:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:58.368 04:10:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:58.368 04:10:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:58.368 04:10:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:58.368 04:10:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.368 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:58.368 ... 00:26:58.368 fio-3.35 00:26:58.368 Starting 3 threads 00:26:58.935 [2024-11-08 04:10:33.889350] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:58.935 [2024-11-08 04:10:33.889444] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:11.139 00:27:11.139 filename0: (groupid=0, jobs=1): err= 0: pid=92307: Fri Nov 8 04:10:44 2024 00:27:11.139 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(279MiB/10046msec) 00:27:11.139 slat (usec): min=6, max=288, avg=15.03, stdev= 9.75 00:27:11.139 clat (usec): min=8105, max=50366, avg=13452.95, stdev=1822.26 00:27:11.139 lat (usec): min=8114, max=50377, avg=13467.98, stdev=1823.21 00:27:11.139 clat percentiles (usec): 00:27:11.139 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[12125], 20.00th=[13042], 00:27:11.139 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:27:11.139 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:27:11.139 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17957], 99.95th=[46400], 00:27:11.139 | 99.99th=[50594] 00:27:11.139 bw ( KiB/s): min=26112, max=30464, per=29.47%, avg=28456.42, stdev=955.06, samples=19 00:27:11.139 iops : min= 204, max= 238, avg=222.32, stdev= 7.46, samples=19 00:27:11.139 lat (msec) : 10=7.39%, 20=92.52%, 50=0.04%, 100=0.04% 00:27:11.139 cpu : usr=93.88%, sys=4.49%, ctx=159, majf=0, minf=9 00:27:11.139 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.139 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:11.139 filename0: (groupid=0, jobs=1): err= 0: pid=92308: Fri Nov 8 04:10:44 2024 00:27:11.139 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(325MiB/10005msec) 00:27:11.139 slat (nsec): min=6273, max=54776, avg=13779.27, stdev=6149.50 00:27:11.139 clat (usec): min=5164, max=17056, avg=11520.44, stdev=1614.51 00:27:11.139 lat (usec): min=5184, max=17063, avg=11534.21, stdev=1615.40 00:27:11.139 clat percentiles (usec): 00:27:11.139 | 1.00th=[ 6718], 5.00th=[ 7373], 10.00th=[ 9503], 20.00th=[10814], 00:27:11.139 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:27:11.139 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:27:11.139 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15533], 99.95th=[16057], 00:27:11.139 | 99.99th=[17171] 00:27:11.139 bw ( KiB/s): min=30976, max=35328, per=34.30%, avg=33121.84, stdev=1224.56, samples=19 00:27:11.139 iops : min= 242, max= 276, avg=258.74, stdev= 9.55, samples=19 00:27:11.139 lat (msec) : 10=11.03%, 20=88.97% 00:27:11.139 cpu : usr=94.06%, sys=4.40%, ctx=10, majf=0, minf=9 00:27:11.139 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.139 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:11.139 filename0: (groupid=0, jobs=1): err= 0: pid=92309: Fri Nov 8 04:10:44 2024 00:27:11.139 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(343MiB/10006msec) 00:27:11.139 slat (nsec): min=6400, max=72371, avg=17344.27, stdev=7034.30 00:27:11.139 clat (usec): min=5783, max=53744, avg=10919.73, stdev=5230.68 00:27:11.139 lat (usec): min=5803, max=53763, avg=10937.08, stdev=5230.86 00:27:11.139 clat percentiles (usec): 00:27:11.139 | 1.00th=[ 8455], 
5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:27:11.139 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:27:11.139 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:27:11.139 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:27:11.139 | 99.99th=[53740] 00:27:11.139 bw ( KiB/s): min=30976, max=38400, per=36.55%, avg=35287.58, stdev=2160.91, samples=19 00:27:11.139 iops : min= 242, max= 300, avg=275.68, stdev=16.88, samples=19 00:27:11.139 lat (msec) : 10=35.29%, 20=63.07%, 50=0.07%, 100=1.57% 00:27:11.139 cpu : usr=92.62%, sys=5.51%, ctx=17, majf=0, minf=9 00:27:11.139 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.139 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.139 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:11.139 00:27:11.139 Run status group 0 (all jobs): 00:27:11.139 READ: bw=94.3MiB/s (98.9MB/s), 27.8MiB/s-34.3MiB/s (29.1MB/s-35.9MB/s), io=947MiB (993MB), run=10005-10046msec 00:27:11.139 04:10:44 -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:11.139 04:10:44 -- target/dif.sh@43 -- # local sub 00:27:11.139 04:10:44 -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.139 04:10:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:11.139 04:10:44 -- target/dif.sh@36 -- # local sub_id=0 00:27:11.139 04:10:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.139 04:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.139 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.139 04:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.139 04:10:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:11.139 04:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.139 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.139 04:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.139 00:27:11.139 real 0m11.139s 00:27:11.139 user 0m28.843s 00:27:11.139 sys 0m1.730s 00:27:11.139 04:10:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.139 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.139 ************************************ 00:27:11.139 END TEST fio_dif_digest 00:27:11.139 ************************************ 00:27:11.139 04:10:44 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:11.139 04:10:44 -- target/dif.sh@147 -- # nvmftestfini 00:27:11.139 04:10:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:11.139 04:10:44 -- nvmf/common.sh@116 -- # sync 00:27:11.139 04:10:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:11.139 04:10:44 -- nvmf/common.sh@119 -- # set +e 00:27:11.139 04:10:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:11.139 04:10:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:11.139 rmmod nvme_tcp 00:27:11.139 rmmod nvme_fabrics 00:27:11.139 rmmod nvme_keyring 00:27:11.139 04:10:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:11.139 04:10:44 -- nvmf/common.sh@123 -- # set -e 00:27:11.139 04:10:44 -- nvmf/common.sh@124 -- # return 0 00:27:11.139 04:10:44 -- nvmf/common.sh@477 -- # '[' -n 91534 ']' 00:27:11.139 04:10:44 -- nvmf/common.sh@478 -- # killprocess 91534 00:27:11.139 04:10:44 -- common/autotest_common.sh@936 -- # '[' -z 91534 ']' 00:27:11.139 
04:10:44 -- common/autotest_common.sh@940 -- # kill -0 91534 00:27:11.139 04:10:44 -- common/autotest_common.sh@941 -- # uname 00:27:11.139 04:10:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:11.139 04:10:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91534 00:27:11.139 04:10:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:11.139 killing process with pid 91534 00:27:11.139 04:10:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:11.139 04:10:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91534' 00:27:11.139 04:10:44 -- common/autotest_common.sh@955 -- # kill 91534 00:27:11.139 04:10:44 -- common/autotest_common.sh@960 -- # wait 91534 00:27:11.139 04:10:44 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:11.139 04:10:44 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:11.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.139 Waiting for block devices as requested 00:27:11.139 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:11.139 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:11.139 04:10:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:11.139 04:10:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:11.139 04:10:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.139 04:10:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:11.139 04:10:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.139 04:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.139 04:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.139 04:10:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:11.139 00:27:11.139 real 1m0.634s 00:27:11.139 user 3m52.614s 00:27:11.139 sys 0m14.322s 00:27:11.140 04:10:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.140 04:10:45 -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 ************************************ 00:27:11.140 END TEST nvmf_dif 00:27:11.140 ************************************ 00:27:11.140 04:10:45 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:11.140 04:10:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:11.140 04:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.140 04:10:45 -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 ************************************ 00:27:11.140 START TEST nvmf_abort_qd_sizes 00:27:11.140 ************************************ 00:27:11.140 04:10:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:11.140 * Looking for test storage... 
00:27:11.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:11.140 04:10:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:11.140 04:10:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:11.140 04:10:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:11.140 04:10:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:11.140 04:10:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:11.140 04:10:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:11.140 04:10:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:11.140 04:10:45 -- scripts/common.sh@335 -- # IFS=.-: 00:27:11.140 04:10:45 -- scripts/common.sh@335 -- # read -ra ver1 00:27:11.140 04:10:45 -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.140 04:10:45 -- scripts/common.sh@336 -- # read -ra ver2 00:27:11.140 04:10:45 -- scripts/common.sh@337 -- # local 'op=<' 00:27:11.140 04:10:45 -- scripts/common.sh@339 -- # ver1_l=2 00:27:11.140 04:10:45 -- scripts/common.sh@340 -- # ver2_l=1 00:27:11.140 04:10:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:11.140 04:10:45 -- scripts/common.sh@343 -- # case "$op" in 00:27:11.140 04:10:45 -- scripts/common.sh@344 -- # : 1 00:27:11.140 04:10:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:11.140 04:10:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.140 04:10:45 -- scripts/common.sh@364 -- # decimal 1 00:27:11.140 04:10:45 -- scripts/common.sh@352 -- # local d=1 00:27:11.140 04:10:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.140 04:10:45 -- scripts/common.sh@354 -- # echo 1 00:27:11.140 04:10:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:11.140 04:10:45 -- scripts/common.sh@365 -- # decimal 2 00:27:11.140 04:10:45 -- scripts/common.sh@352 -- # local d=2 00:27:11.140 04:10:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.140 04:10:45 -- scripts/common.sh@354 -- # echo 2 00:27:11.140 04:10:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:11.140 04:10:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:11.140 04:10:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:11.140 04:10:45 -- scripts/common.sh@367 -- # return 0 00:27:11.140 04:10:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.140 04:10:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:11.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.140 --rc genhtml_branch_coverage=1 00:27:11.140 --rc genhtml_function_coverage=1 00:27:11.140 --rc genhtml_legend=1 00:27:11.140 --rc geninfo_all_blocks=1 00:27:11.140 --rc geninfo_unexecuted_blocks=1 00:27:11.140 00:27:11.140 ' 00:27:11.140 04:10:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:11.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.140 --rc genhtml_branch_coverage=1 00:27:11.140 --rc genhtml_function_coverage=1 00:27:11.140 --rc genhtml_legend=1 00:27:11.140 --rc geninfo_all_blocks=1 00:27:11.140 --rc geninfo_unexecuted_blocks=1 00:27:11.140 00:27:11.140 ' 00:27:11.140 04:10:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:11.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.140 --rc genhtml_branch_coverage=1 00:27:11.140 --rc genhtml_function_coverage=1 00:27:11.140 --rc genhtml_legend=1 00:27:11.140 --rc geninfo_all_blocks=1 00:27:11.140 --rc geninfo_unexecuted_blocks=1 00:27:11.140 00:27:11.140 ' 00:27:11.140 
04:10:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:11.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.140 --rc genhtml_branch_coverage=1 00:27:11.140 --rc genhtml_function_coverage=1 00:27:11.140 --rc genhtml_legend=1 00:27:11.140 --rc geninfo_all_blocks=1 00:27:11.140 --rc geninfo_unexecuted_blocks=1 00:27:11.140 00:27:11.140 ' 00:27:11.140 04:10:45 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:11.140 04:10:45 -- nvmf/common.sh@7 -- # uname -s 00:27:11.140 04:10:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.140 04:10:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.140 04:10:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.140 04:10:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.140 04:10:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.140 04:10:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.140 04:10:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.140 04:10:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.140 04:10:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.140 04:10:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:27:11.140 04:10:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 00:27:11.140 04:10:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.140 04:10:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.140 04:10:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:11.140 04:10:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:11.140 04:10:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.140 04:10:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.140 04:10:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.140 04:10:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.140 04:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.140 04:10:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.140 04:10:45 -- paths/export.sh@5 -- # export PATH 00:27:11.140 04:10:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.140 04:10:45 -- nvmf/common.sh@46 -- # : 0 00:27:11.140 04:10:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:11.140 04:10:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:11.140 04:10:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:11.140 04:10:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.140 04:10:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.140 04:10:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:11.140 04:10:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:11.140 04:10:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:11.140 04:10:45 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:27:11.140 04:10:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:11.140 04:10:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.140 04:10:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:11.140 04:10:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:11.140 04:10:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:11.140 04:10:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.140 04:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.140 04:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.140 04:10:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:11.140 04:10:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:11.140 04:10:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.140 04:10:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.140 04:10:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:11.140 04:10:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:11.140 04:10:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:11.140 04:10:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:11.140 04:10:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:11.140 04:10:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.140 04:10:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:11.140 04:10:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:11.140 04:10:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:11.140 04:10:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:11.140 04:10:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:11.140 04:10:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:11.140 Cannot find device "nvmf_tgt_br" 00:27:11.140 04:10:45 -- nvmf/common.sh@154 -- # true 00:27:11.140 04:10:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:11.140 Cannot find device "nvmf_tgt_br2" 00:27:11.140 04:10:45 -- nvmf/common.sh@155 -- # true 
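
The "Cannot find device" and "Cannot open network namespace" messages here (and just below) are expected: nvmf_veth_init tears down any leftover topology before rebuilding it, and on a fresh VM there is nothing to delete, so each cleanup step reports failure and moves on. The construction that follows boils down to this iproute2 sequence (condensed from the trace; the individual link-up steps for the veth endpoints are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target ends live
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk             # in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up && ip link set "$br" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that close the block are the smoke test: the host reaches 10.0.0.2 and 10.0.0.3 across the bridge, and the namespace reaches 10.0.0.1 back out, so the NVMe/TCP listener on 10.0.0.2:4420 will be reachable.
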
00:27:11.140 04:10:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:11.140 04:10:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:11.140 Cannot find device "nvmf_tgt_br" 00:27:11.140 04:10:45 -- nvmf/common.sh@157 -- # true 00:27:11.140 04:10:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:11.140 Cannot find device "nvmf_tgt_br2" 00:27:11.140 04:10:45 -- nvmf/common.sh@158 -- # true 00:27:11.140 04:10:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:11.140 04:10:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:11.140 04:10:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:11.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:11.140 04:10:45 -- nvmf/common.sh@161 -- # true 00:27:11.140 04:10:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:11.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:11.140 04:10:45 -- nvmf/common.sh@162 -- # true 00:27:11.140 04:10:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:11.140 04:10:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:11.140 04:10:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:11.140 04:10:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:11.140 04:10:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:11.140 04:10:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:11.140 04:10:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:11.140 04:10:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:11.140 04:10:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:11.140 04:10:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:11.140 04:10:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:11.140 04:10:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:11.140 04:10:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:11.140 04:10:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:11.140 04:10:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:11.140 04:10:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:11.140 04:10:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:11.140 04:10:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:11.140 04:10:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:11.140 04:10:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:11.140 04:10:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:11.140 04:10:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:11.140 04:10:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:11.140 04:10:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:11.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:11.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:27:11.140 00:27:11.140 --- 10.0.0.2 ping statistics --- 00:27:11.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.140 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:11.140 04:10:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:11.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:11.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:27:11.140 00:27:11.140 --- 10.0.0.3 ping statistics --- 00:27:11.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.140 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:11.140 04:10:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:11.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:27:11.140 00:27:11.140 --- 10.0.0.1 ping statistics --- 00:27:11.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.140 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:27:11.140 04:10:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.140 04:10:46 -- nvmf/common.sh@421 -- # return 0 00:27:11.140 04:10:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:11.140 04:10:46 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:11.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.966 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:11.966 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:27:11.966 04:10:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.966 04:10:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:11.966 04:10:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:11.966 04:10:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.966 04:10:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:11.966 04:10:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:11.966 04:10:47 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:27:11.966 04:10:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:11.967 04:10:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.967 04:10:47 -- common/autotest_common.sh@10 -- # set +x 00:27:11.967 04:10:47 -- nvmf/common.sh@469 -- # nvmfpid=92914 00:27:11.967 04:10:47 -- nvmf/common.sh@470 -- # waitforlisten 92914 00:27:11.967 04:10:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:11.967 04:10:47 -- common/autotest_common.sh@829 -- # '[' -z 92914 ']' 00:27:11.967 04:10:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.967 04:10:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.967 04:10:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.967 04:10:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.967 04:10:47 -- common/autotest_common.sh@10 -- # set +x 00:27:11.967 [2024-11-08 04:10:47.073499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
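
While the target comes up, it is worth previewing how abort_qd_sizes picks its PCIe device for the spdk_target_abort leg. The nvme_in_userspace helper traced below enumerates NVMe controllers by PCI class code (class 01, mass storage; subclass 08, non-volatile memory; prog-if 02, NVMe); its pipeline reduces to roughly this, assuming pciutils' lspci:

# Machine-readable listing with PCI domains; keep prog-if 02 entries, then
# match the quoted class field "0108" and print the bare BDF.
lspci -mm -n -D |
    grep -i -- -p02 |
    awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'

On this VM it yields the two QEMU-attached controllers, 0000:00:06.0 and 0000:00:07.0, and the first becomes the $nvme handed to bdev_nvme_attach_controller -t pcie, which surfaces as the spdk_targetn1 bdev.
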
00:27:11.967 [2024-11-08 04:10:47.073600] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.225 [2024-11-08 04:10:47.214006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.225 [2024-11-08 04:10:47.332834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:12.225 [2024-11-08 04:10:47.333334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.225 [2024-11-08 04:10:47.333513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.225 [2024-11-08 04:10:47.333762] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.484 [2024-11-08 04:10:47.334197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.484 [2024-11-08 04:10:47.334346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.484 [2024-11-08 04:10:47.334586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.484 [2024-11-08 04:10:47.334595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.051 04:10:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.051 04:10:48 -- common/autotest_common.sh@862 -- # return 0 00:27:13.051 04:10:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:13.051 04:10:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.051 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.051 04:10:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.051 04:10:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:13.051 04:10:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:27:13.051 04:10:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:27:13.051 04:10:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:13.051 04:10:48 -- scripts/common.sh@312 -- # local nvmes 00:27:13.051 04:10:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:13.051 04:10:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:13.051 04:10:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:13.051 04:10:48 -- scripts/common.sh@297 -- # local bdf= 00:27:13.051 04:10:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:13.051 04:10:48 -- scripts/common.sh@232 -- # local class 00:27:13.051 04:10:48 -- scripts/common.sh@233 -- # local subclass 00:27:13.051 04:10:48 -- scripts/common.sh@234 -- # local progif 00:27:13.051 04:10:48 -- scripts/common.sh@235 -- # printf %02x 1 00:27:13.051 04:10:48 -- scripts/common.sh@235 -- # class=01 00:27:13.051 04:10:48 -- scripts/common.sh@236 -- # printf %02x 8 00:27:13.051 04:10:48 -- scripts/common.sh@236 -- # subclass=08 00:27:13.051 04:10:48 -- scripts/common.sh@237 -- # printf %02x 2 00:27:13.051 04:10:48 -- scripts/common.sh@237 -- # progif=02 00:27:13.051 04:10:48 -- scripts/common.sh@239 -- # hash lspci 00:27:13.051 04:10:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:13.051 04:10:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:13.051 04:10:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:13.051 04:10:48 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:13.051 04:10:48 -- scripts/common.sh@244 -- # tr -d '"' 00:27:13.311 04:10:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:13.311 04:10:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:13.311 04:10:48 -- scripts/common.sh@15 -- # local i 00:27:13.311 04:10:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:13.311 04:10:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:13.311 04:10:48 -- scripts/common.sh@24 -- # return 0 00:27:13.311 04:10:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:13.311 04:10:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:13.311 04:10:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:27:13.311 04:10:48 -- scripts/common.sh@15 -- # local i 00:27:13.311 04:10:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:27:13.311 04:10:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:13.311 04:10:48 -- scripts/common.sh@24 -- # return 0 00:27:13.311 04:10:48 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:27:13.311 04:10:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:13.311 04:10:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:13.311 04:10:48 -- scripts/common.sh@322 -- # uname -s 00:27:13.311 04:10:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:13.311 04:10:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:13.311 04:10:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:13.311 04:10:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:27:13.311 04:10:48 -- scripts/common.sh@322 -- # uname -s 00:27:13.311 04:10:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:13.311 04:10:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:13.311 04:10:48 -- scripts/common.sh@327 -- # (( 2 )) 00:27:13.311 04:10:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:27:13.311 04:10:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:13.311 04:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 ************************************ 00:27:13.311 START TEST spdk_target_abort 00:27:13.311 ************************************ 00:27:13.311 04:10:48 -- common/autotest_common.sh@1114 -- # spdk_target 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:27:13.311 04:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 spdk_targetn1 00:27:13.311 04:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:13.311 04:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 [2024-11-08 
04:10:48.274962] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.311 04:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:27:13.311 04:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 04:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:27:13.311 04:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 04:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:27:13.311 04:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.311 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.311 [2024-11-08 04:10:48.303137] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.311 04:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:13.311 04:10:48 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:16.603 Initializing NVMe Controllers 00:27:16.603 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:16.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:16.603 Initialization complete. Launching workers. 00:27:16.603 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10460, failed: 0 00:27:16.603 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1162, failed to submit 9298 00:27:16.603 success 747, unsuccess 415, failed 0 00:27:16.603 04:10:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:16.603 04:10:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:19.890 Initializing NVMe Controllers 00:27:19.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:19.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:19.890 Initialization complete. Launching workers. 00:27:19.890 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5999, failed: 0 00:27:19.890 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1249, failed to submit 4750 00:27:19.890 success 243, unsuccess 1006, failed 0 00:27:19.890 04:10:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.890 04:10:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:23.215 Initializing NVMe Controllers 00:27:23.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:23.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:23.215 Initialization complete. Launching workers. 
00:27:23.215 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31709, failed: 0 00:27:23.215 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2700, failed to submit 29009 00:27:23.215 success 429, unsuccess 2271, failed 0 00:27:23.215 04:10:58 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:23.215 04:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:27:23.215 04:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.215 04:10:58 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:23.215 04:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 04:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.472 04:10:58 -- target/abort_qd_sizes.sh@62 -- # killprocess 92914 00:27:23.472 04:10:58 -- common/autotest_common.sh@936 -- # '[' -z 92914 ']' 00:27:23.472 04:10:58 -- common/autotest_common.sh@940 -- # kill -0 92914 00:27:23.472 04:10:58 -- common/autotest_common.sh@941 -- # uname 00:27:23.472 04:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.472 04:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92914 00:27:23.472 killing process with pid 92914 00:27:23.472 04:10:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:23.472 04:10:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:23.472 04:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92914' 00:27:23.472 04:10:58 -- common/autotest_common.sh@955 -- # kill 92914 00:27:23.472 04:10:58 -- common/autotest_common.sh@960 -- # wait 92914 00:27:24.037 ************************************ 00:27:24.037 END TEST spdk_target_abort 00:27:24.037 ************************************ 00:27:24.037 00:27:24.037 real 0m10.667s 00:27:24.037 user 0m43.197s 00:27:24.037 sys 0m1.770s 00:27:24.037 04:10:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:24.037 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 04:10:58 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:24.037 04:10:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:24.037 04:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:24.037 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 ************************************ 00:27:24.037 START TEST kernel_target_abort 00:27:24.037 ************************************ 00:27:24.037 04:10:58 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:24.037 04:10:58 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:24.038 04:10:58 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:24.038 04:10:58 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:24.038 04:10:58 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:24.038 04:10:58 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:24.038 04:10:58 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:24.038 04:10:58 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:24.038 04:10:58 -- nvmf/common.sh@627 -- # local block nvme 00:27:24.038 04:10:58 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:24.038 04:10:58 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:24.038 04:10:58 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:24.038 04:10:58 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:24.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:24.296 Waiting for block devices as requested 00:27:24.296 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:24.554 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:24.554 04:10:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:24.554 04:10:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:24.554 04:10:59 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:24.554 04:10:59 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:24.554 04:10:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:24.554 No valid GPT data, bailing 00:27:24.554 04:10:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:24.554 04:10:59 -- scripts/common.sh@393 -- # pt= 00:27:24.554 04:10:59 -- scripts/common.sh@394 -- # return 1 00:27:24.554 04:10:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:24.555 04:10:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:24.555 04:10:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:24.555 04:10:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:24.555 04:10:59 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:24.555 04:10:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:24.813 No valid GPT data, bailing 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # pt= 00:27:24.813 04:10:59 -- scripts/common.sh@394 -- # return 1 00:27:24.813 04:10:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:24.813 04:10:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:24.813 04:10:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:24.813 04:10:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:24.813 04:10:59 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:24.813 04:10:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:24.813 No valid GPT data, bailing 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # pt= 00:27:24.813 04:10:59 -- scripts/common.sh@394 -- # return 1 00:27:24.813 04:10:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:24.813 04:10:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:24.813 04:10:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:24.813 04:10:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:24.813 04:10:59 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:24.813 04:10:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:24.813 No valid GPT data, bailing 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:24.813 04:10:59 -- scripts/common.sh@393 -- # pt= 00:27:24.813 04:10:59 -- scripts/common.sh@394 -- # return 1 00:27:24.813 04:10:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:24.813 04:10:59 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:24.813 04:10:59 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:24.813 04:10:59 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:24.813 04:10:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:24.813 04:10:59 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:24.813 04:10:59 -- nvmf/common.sh@654 -- # echo 1 00:27:24.813 04:10:59 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:24.813 04:10:59 -- nvmf/common.sh@656 -- # echo 1 00:27:24.813 04:10:59 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:24.813 04:10:59 -- nvmf/common.sh@663 -- # echo tcp 00:27:24.813 04:10:59 -- nvmf/common.sh@664 -- # echo 4420 00:27:24.813 04:10:59 -- nvmf/common.sh@665 -- # echo ipv4 00:27:24.813 04:10:59 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:24.813 04:10:59 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bcb05152-0cc3-4ff8-8903-5bb8968d2c01 --hostid=bcb05152-0cc3-4ff8-8903-5bb8968d2c01 -a 10.0.0.1 -t tcp -s 4420 00:27:24.813 00:27:24.813 Discovery Log Number of Records 2, Generation counter 2 00:27:24.813 =====Discovery Log Entry 0====== 00:27:24.813 trtype: tcp 00:27:24.813 adrfam: ipv4 00:27:24.813 subtype: current discovery subsystem 00:27:24.813 treq: not specified, sq flow control disable supported 00:27:24.813 portid: 1 00:27:24.813 trsvcid: 4420 00:27:24.813 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:24.813 traddr: 10.0.0.1 00:27:24.813 eflags: none 00:27:24.813 sectype: none 00:27:24.813 =====Discovery Log Entry 1====== 00:27:24.813 trtype: tcp 00:27:24.813 adrfam: ipv4 00:27:24.813 subtype: nvme subsystem 00:27:24.813 treq: not specified, sq flow control disable supported 00:27:24.813 portid: 1 00:27:24.813 trsvcid: 4420 00:27:24.813 subnqn: kernel_target 00:27:24.813 traddr: 10.0.0.1 00:27:24.813 eflags: none 00:27:24.813 sectype: none 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
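The configure_kernel_target sequence traced above builds a kernel nvmet target entirely through configfs mkdir/echo/ln -s operations, then verifies it with nvme discover. A minimal by-hand sketch of the same steps; the attribute file names are the standard nvmet ones and are assumed here, since the trace shows only the values being written:

    modprobe nvmet nvmet-tcp   # the cleanup later unloads nvmet_tcp, so both modules are in play
    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target subsystems/kernel_target/namespaces/1 ports/1
    echo SPDK-kernel_target > subsystems/kernel_target/attr_serial         # assumed attribute name
    echo 1 > subsystems/kernel_target/attr_allow_any_host                  # assumed attribute name
    echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
    echo 1 > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

/dev/nvme1n3 is exported because it was the last block device the spdk-gpt.py probe above reported as unused ("No valid GPT data, bailing"); the nvme discover output above then confirms the port exposes both the discovery subsystem and kernel_target.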
00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:24.813 04:10:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:28.195 Initializing NVMe Controllers 00:27:28.195 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:28.195 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:28.195 Initialization complete. Launching workers. 00:27:28.195 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 36233, failed: 0 00:27:28.195 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36233, failed to submit 0 00:27:28.195 success 0, unsuccess 36233, failed 0 00:27:28.195 04:11:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:28.195 04:11:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:31.481 Initializing NVMe Controllers 00:27:31.481 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:31.481 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:31.481 Initialization complete. Launching workers. 00:27:31.481 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 84639, failed: 0 00:27:31.481 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 35867, failed to submit 48772 00:27:31.481 success 0, unsuccess 35867, failed 0 00:27:31.481 04:11:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:31.481 04:11:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:34.763 Initializing NVMe Controllers 00:27:34.763 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:34.763 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:34.763 Initialization complete. Launching workers. 
00:27:34.763 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 104131, failed: 0 00:27:34.763 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26018, failed to submit 78113 00:27:34.763 success 0, unsuccess 26018, failed 0 00:27:34.763 04:11:09 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:34.763 04:11:09 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:34.763 04:11:09 -- nvmf/common.sh@677 -- # echo 0 00:27:34.763 04:11:09 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:34.763 04:11:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:34.763 04:11:09 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:34.763 04:11:09 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:34.763 04:11:09 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:34.763 04:11:09 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:34.763 ************************************ 00:27:34.763 END TEST kernel_target_abort 00:27:34.763 ************************************ 00:27:34.763 00:27:34.763 real 0m10.571s 00:27:34.763 user 0m5.529s 00:27:34.763 sys 0m2.226s 00:27:34.763 04:11:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:34.763 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:27:34.763 04:11:09 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:34.763 04:11:09 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:34.763 04:11:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:34.763 04:11:09 -- nvmf/common.sh@116 -- # sync 00:27:34.763 04:11:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:34.763 04:11:09 -- nvmf/common.sh@119 -- # set +e 00:27:34.763 04:11:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:34.763 04:11:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:34.763 rmmod nvme_tcp 00:27:34.763 rmmod nvme_fabrics 00:27:34.763 rmmod nvme_keyring 00:27:34.763 04:11:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:34.763 04:11:09 -- nvmf/common.sh@123 -- # set -e 00:27:34.763 04:11:09 -- nvmf/common.sh@124 -- # return 0 00:27:34.763 04:11:09 -- nvmf/common.sh@477 -- # '[' -n 92914 ']' 00:27:34.763 04:11:09 -- nvmf/common.sh@478 -- # killprocess 92914 00:27:34.763 04:11:09 -- common/autotest_common.sh@936 -- # '[' -z 92914 ']' 00:27:34.763 04:11:09 -- common/autotest_common.sh@940 -- # kill -0 92914 00:27:34.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92914) - No such process 00:27:34.763 Process with pid 92914 is not found 00:27:34.763 04:11:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92914 is not found' 00:27:34.763 04:11:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:34.763 04:11:09 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:35.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:35.332 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:35.332 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:35.332 04:11:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:35.332 04:11:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:35.332 04:11:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.332 04:11:10 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:35.332 04:11:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.332 04:11:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:35.332 04:11:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.332 04:11:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:35.332 00:27:35.332 real 0m24.928s 00:27:35.332 user 0m50.236s 00:27:35.332 sys 0m5.404s 00:27:35.332 04:11:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:35.591 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:27:35.591 ************************************ 00:27:35.591 END TEST nvmf_abort_qd_sizes 00:27:35.591 ************************************ 00:27:35.591 04:11:10 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:35.591 04:11:10 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:35.591 04:11:10 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:35.591 04:11:10 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:35.591 04:11:10 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:35.591 04:11:10 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:35.591 04:11:10 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:35.591 04:11:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.591 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:27:35.591 04:11:10 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:35.591 04:11:10 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:35.591 04:11:10 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:35.591 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.494 INFO: APP EXITING 00:27:37.494 INFO: killing all VMs 00:27:37.494 INFO: killing vhost app 00:27:37.494 INFO: EXIT DONE 00:27:38.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:38.062 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:38.062 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:39.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:39.017 Cleaning 00:27:39.017 Removing: /var/run/dpdk/spdk0/config 00:27:39.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:39.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:39.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:39.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:39.017 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:39.017 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:39.017 Removing: /var/run/dpdk/spdk1/config 00:27:39.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:39.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:39.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:39.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:39.017 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:39.017 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:39.017 Removing: /var/run/dpdk/spdk2/config 00:27:39.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:39.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:39.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:39.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:39.017 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:39.017 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:39.017 Removing: /var/run/dpdk/spdk3/config 00:27:39.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:39.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:39.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:39.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:39.017 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:39.017 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:39.017 Removing: /var/run/dpdk/spdk4/config 00:27:39.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:39.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:39.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:39.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:39.017 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:39.017 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:39.017 Removing: /dev/shm/nvmf_trace.0 00:27:39.017 Removing: /dev/shm/spdk_tgt_trace.pid55565 00:27:39.017 Removing: /var/run/dpdk/spdk0 00:27:39.017 Removing: /var/run/dpdk/spdk1 00:27:39.017 Removing: /var/run/dpdk/spdk2 00:27:39.017 Removing: /var/run/dpdk/spdk3 00:27:39.017 Removing: /var/run/dpdk/spdk4 00:27:39.017 Removing: /var/run/dpdk/spdk_pid55413 00:27:39.017 Removing: /var/run/dpdk/spdk_pid55565 00:27:39.017 Removing: /var/run/dpdk/spdk_pid55886 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56161 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56344 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56433 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56532 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56634 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56667 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56702 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56771 00:27:39.017 Removing: /var/run/dpdk/spdk_pid56875 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57507 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57571 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57640 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57668 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57747 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57779 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57879 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57912 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57963 00:27:39.017 Removing: /var/run/dpdk/spdk_pid57993 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58045 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58075 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58234 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58275 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58351 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58426 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58456 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58515 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58534 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58574 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58588 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58628 
00:27:39.017 Removing: /var/run/dpdk/spdk_pid58648 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58682 00:27:39.017 Removing: /var/run/dpdk/spdk_pid58702 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58736 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58756 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58790 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58810 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58844 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58864 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58893 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58918 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58947 00:27:39.277 Removing: /var/run/dpdk/spdk_pid58966 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59001 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59015 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59055 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59069 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59109 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59123 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59154 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59177 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59206 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59231 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59262 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59282 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59316 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59336 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59370 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59393 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59432 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59455 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59487 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59506 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59543 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59562 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59598 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59675 00:27:39.277 Removing: /var/run/dpdk/spdk_pid59794 00:27:39.277 Removing: /var/run/dpdk/spdk_pid60224 00:27:39.277 Removing: /var/run/dpdk/spdk_pid67210 00:27:39.277 Removing: /var/run/dpdk/spdk_pid67553 00:27:39.277 Removing: /var/run/dpdk/spdk_pid69965 00:27:39.277 Removing: /var/run/dpdk/spdk_pid70357 00:27:39.277 Removing: /var/run/dpdk/spdk_pid70619 00:27:39.277 Removing: /var/run/dpdk/spdk_pid70670 00:27:39.277 Removing: /var/run/dpdk/spdk_pid70938 00:27:39.277 Removing: /var/run/dpdk/spdk_pid70946 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71003 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71057 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71117 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71161 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71163 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71194 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71232 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71234 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71292 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71356 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71411 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71449 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71462 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71482 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71783 00:27:39.277 Removing: /var/run/dpdk/spdk_pid71941 00:27:39.277 Removing: /var/run/dpdk/spdk_pid72211 00:27:39.277 Removing: /var/run/dpdk/spdk_pid72260 00:27:39.277 Removing: /var/run/dpdk/spdk_pid72649 00:27:39.277 Removing: /var/run/dpdk/spdk_pid73189 00:27:39.277 Removing: /var/run/dpdk/spdk_pid73619 00:27:39.277 Removing: /var/run/dpdk/spdk_pid74607 00:27:39.277 Removing: 
/var/run/dpdk/spdk_pid75603 00:27:39.277 Removing: /var/run/dpdk/spdk_pid75715 00:27:39.277 Removing: /var/run/dpdk/spdk_pid75783 00:27:39.277 Removing: /var/run/dpdk/spdk_pid77275 00:27:39.277 Removing: /var/run/dpdk/spdk_pid77524 00:27:39.277 Removing: /var/run/dpdk/spdk_pid77979 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78080 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78231 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78281 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78322 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78372 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78531 00:27:39.277 Removing: /var/run/dpdk/spdk_pid78678 00:27:39.536 Removing: /var/run/dpdk/spdk_pid78943 00:27:39.536 Removing: /var/run/dpdk/spdk_pid79066 00:27:39.536 Removing: /var/run/dpdk/spdk_pid79487 00:27:39.536 Removing: /var/run/dpdk/spdk_pid79877 00:27:39.536 Removing: /var/run/dpdk/spdk_pid79880 00:27:39.536 Removing: /var/run/dpdk/spdk_pid82137 00:27:39.536 Removing: /var/run/dpdk/spdk_pid82452 00:27:39.536 Removing: /var/run/dpdk/spdk_pid82973 00:27:39.536 Removing: /var/run/dpdk/spdk_pid82981 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83333 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83347 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83361 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83398 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83404 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83542 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83554 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83658 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83660 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83768 00:27:39.536 Removing: /var/run/dpdk/spdk_pid83770 00:27:39.536 Removing: /var/run/dpdk/spdk_pid84249 00:27:39.536 Removing: /var/run/dpdk/spdk_pid84302 00:27:39.536 Removing: /var/run/dpdk/spdk_pid84449 00:27:39.536 Removing: /var/run/dpdk/spdk_pid84572 00:27:39.536 Removing: /var/run/dpdk/spdk_pid84979 00:27:39.536 Removing: /var/run/dpdk/spdk_pid85226 00:27:39.536 Removing: /var/run/dpdk/spdk_pid85729 00:27:39.536 Removing: /var/run/dpdk/spdk_pid86292 00:27:39.536 Removing: /var/run/dpdk/spdk_pid86762 00:27:39.536 Removing: /var/run/dpdk/spdk_pid86851 00:27:39.536 Removing: /var/run/dpdk/spdk_pid86937 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87034 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87186 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87278 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87364 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87455 00:27:39.536 Removing: /var/run/dpdk/spdk_pid87820 00:27:39.536 Removing: /var/run/dpdk/spdk_pid88526 00:27:39.536 Removing: /var/run/dpdk/spdk_pid89891 00:27:39.536 Removing: /var/run/dpdk/spdk_pid90092 00:27:39.536 Removing: /var/run/dpdk/spdk_pid90377 00:27:39.536 Removing: /var/run/dpdk/spdk_pid90688 00:27:39.536 Removing: /var/run/dpdk/spdk_pid91230 00:27:39.536 Removing: /var/run/dpdk/spdk_pid91241 00:27:39.536 Removing: /var/run/dpdk/spdk_pid91615 00:27:39.536 Removing: /var/run/dpdk/spdk_pid91776 00:27:39.536 Removing: /var/run/dpdk/spdk_pid91933 00:27:39.536 Removing: /var/run/dpdk/spdk_pid92030 00:27:39.536 Removing: /var/run/dpdk/spdk_pid92192 00:27:39.536 Removing: /var/run/dpdk/spdk_pid92302 00:27:39.536 Removing: /var/run/dpdk/spdk_pid92983 00:27:39.536 Removing: /var/run/dpdk/spdk_pid93020 00:27:39.536 Removing: /var/run/dpdk/spdk_pid93055 00:27:39.536 Removing: /var/run/dpdk/spdk_pid93304 00:27:39.536 Removing: /var/run/dpdk/spdk_pid93335 00:27:39.536 Removing: /var/run/dpdk/spdk_pid93371 00:27:39.536 Clean 00:27:39.795 killing process with pid 
49810 00:27:39.795 killing process with pid 49813 00:27:39.795 04:11:14 -- common/autotest_common.sh@1446 -- # return 0 00:27:39.795 04:11:14 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:39.795 04:11:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.795 04:11:14 -- common/autotest_common.sh@10 -- # set +x 00:27:39.795 04:11:14 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:39.795 04:11:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.795 04:11:14 -- common/autotest_common.sh@10 -- # set +x 00:27:39.795 04:11:14 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:39.795 04:11:14 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:39.795 04:11:14 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:39.795 04:11:14 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:39.795 04:11:14 -- spdk/autotest.sh@383 -- # hostname 00:27:39.795 04:11:14 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:40.054 geninfo: WARNING: invalid characters removed from testname! 00:28:02.004 04:11:36 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:05.288 04:11:39 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.190 04:11:42 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:09.721 04:11:44 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:11.623 04:11:46 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:13.524 04:11:48 -- spdk/autotest.sh@392 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:16.053 04:11:50 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:16.053 04:11:50 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:16.053 04:11:50 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:16.053 04:11:50 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:16.053 04:11:50 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:16.053 04:11:50 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:16.053 04:11:50 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:16.053 04:11:50 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:16.053 04:11:50 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:16.053 04:11:50 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:16.053 04:11:50 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:16.053 04:11:50 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:16.053 04:11:50 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:16.053 04:11:50 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:16.053 04:11:50 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:16.053 04:11:50 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:28:16.053 04:11:50 -- scripts/common.sh@343 -- $ case "$op" in 00:28:16.053 04:11:50 -- scripts/common.sh@344 -- $ : 1 00:28:16.053 04:11:50 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:16.053 04:11:50 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.053 04:11:50 -- scripts/common.sh@364 -- $ decimal 1 00:28:16.053 04:11:50 -- scripts/common.sh@352 -- $ local d=1 00:28:16.053 04:11:50 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:16.053 04:11:50 -- scripts/common.sh@354 -- $ echo 1 00:28:16.053 04:11:50 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:16.053 04:11:50 -- scripts/common.sh@365 -- $ decimal 2 00:28:16.053 04:11:50 -- scripts/common.sh@352 -- $ local d=2 00:28:16.053 04:11:50 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:16.053 04:11:50 -- scripts/common.sh@354 -- $ echo 2 00:28:16.053 04:11:50 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:16.053 04:11:50 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:16.053 04:11:50 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:16.053 04:11:50 -- scripts/common.sh@367 -- $ return 0 00:28:16.053 04:11:50 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.053 04:11:50 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.053 --rc genhtml_branch_coverage=1 00:28:16.053 --rc genhtml_function_coverage=1 00:28:16.053 --rc genhtml_legend=1 00:28:16.053 --rc geninfo_all_blocks=1 00:28:16.053 --rc geninfo_unexecuted_blocks=1 00:28:16.053 00:28:16.053 ' 00:28:16.053 04:11:50 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.053 --rc genhtml_branch_coverage=1 00:28:16.053 --rc genhtml_function_coverage=1 00:28:16.053 --rc genhtml_legend=1 00:28:16.053 --rc geninfo_all_blocks=1 00:28:16.053 --rc geninfo_unexecuted_blocks=1 00:28:16.053 00:28:16.053 ' 00:28:16.053 04:11:50 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.053 --rc genhtml_branch_coverage=1 00:28:16.053 --rc genhtml_function_coverage=1 00:28:16.053 --rc genhtml_legend=1 00:28:16.053 --rc geninfo_all_blocks=1 00:28:16.053 --rc geninfo_unexecuted_blocks=1 00:28:16.053 00:28:16.053 ' 00:28:16.053 04:11:50 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.053 --rc genhtml_branch_coverage=1 00:28:16.053 --rc genhtml_function_coverage=1 00:28:16.053 --rc genhtml_legend=1 00:28:16.053 --rc geninfo_all_blocks=1 00:28:16.053 --rc geninfo_unexecuted_blocks=1 00:28:16.053 00:28:16.053 ' 00:28:16.053 04:11:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:16.053 04:11:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:16.053 04:11:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.053 04:11:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.053 04:11:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.053 04:11:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.053 04:11:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.053 04:11:50 -- paths/export.sh@5 -- $ export PATH 00:28:16.053 04:11:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.053 04:11:50 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:16.053 04:11:50 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:16.053 04:11:50 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731039110.XXXXXX 00:28:16.053 04:11:50 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731039110.n15LLh 00:28:16.053 04:11:50 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:16.053 04:11:50 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:16.053 04:11:50 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:16.053 04:11:50 -- 
common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:16.053 04:11:50 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:16.053 04:11:50 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:16.053 04:11:50 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:16.053 04:11:50 -- common/autotest_common.sh@10 -- $ set +x 00:28:16.053 04:11:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:28:16.053 04:11:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:16.053 04:11:50 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:16.053 04:11:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:16.053 04:11:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:16.053 04:11:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:16.053 04:11:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:16.053 04:11:50 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:16.053 04:11:50 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:16.053 04:11:50 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:16.053 04:11:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:16.053 + [[ -n 5239 ]] 00:28:16.053 + sudo kill 5239 00:28:16.063 [Pipeline] } 00:28:16.079 [Pipeline] // timeout 00:28:16.085 [Pipeline] } 00:28:16.100 [Pipeline] // stage 00:28:16.106 [Pipeline] } 00:28:16.120 [Pipeline] // catchError 00:28:16.130 [Pipeline] stage 00:28:16.132 [Pipeline] { (Stop VM) 00:28:16.145 [Pipeline] sh 00:28:16.426 + vagrant halt 00:28:19.752 ==> default: Halting domain... 00:28:26.329 [Pipeline] sh 00:28:26.608 + vagrant destroy -f 00:28:29.142 ==> default: Removing domain... 00:28:29.413 [Pipeline] sh 00:28:29.694 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:29.704 [Pipeline] } 00:28:29.718 [Pipeline] // stage 00:28:29.724 [Pipeline] } 00:28:29.738 [Pipeline] // dir 00:28:29.743 [Pipeline] } 00:28:29.758 [Pipeline] // wrap 00:28:29.764 [Pipeline] } 00:28:29.777 [Pipeline] // catchError 00:28:29.786 [Pipeline] stage 00:28:29.788 [Pipeline] { (Epilogue) 00:28:29.801 [Pipeline] sh 00:28:30.083 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:35.364 [Pipeline] catchError 00:28:35.367 [Pipeline] { 00:28:35.381 [Pipeline] sh 00:28:35.671 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:35.671 Artifacts sizes are good 00:28:35.679 [Pipeline] } 00:28:35.694 [Pipeline] // catchError 00:28:35.706 [Pipeline] archiveArtifacts 00:28:35.713 Archiving artifacts 00:28:35.830 [Pipeline] cleanWs 00:28:35.843 [WS-CLEANUP] Deleting project workspace... 00:28:35.843 [WS-CLEANUP] Deferred wipeout is used... 00:28:35.849 [WS-CLEANUP] done 00:28:35.851 [Pipeline] } 00:28:35.866 [Pipeline] // stage 00:28:35.873 [Pipeline] } 00:28:35.887 [Pipeline] // node 00:28:35.892 [Pipeline] End of Pipeline 00:28:35.936 Finished: SUCCESS
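For reference, the coverage post-processing in this run's epilogue reduces to the lcov sequence below: the base and test captures are merged into a total, then third-party and uninteresting paths are stripped before the report is generated. A condensed sketch of the commands traced above, with the long --rc option lists elided and $out standing in for the spdk/../output directory:

    out=/home/vagrant/spdk_repo/output
    lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info --ignore-errors unused,unused '/usr/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '*/examples/vmd/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '*/app/spdk_lspci/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '*/app/spdk_top/*' -o $out/cov_total.info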